[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-62]
Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9+1⁄2 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr, located in the city, is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod declined in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day. Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs.
The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued in the Levant Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases, Stratum IVb having a mudbrick wall on stone foundations and rounded exterior corners.
In Stratum IVa there was a mudbrick wall with no stone foundations, with imported Egyptian pottery and local pottery imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE.
In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba Map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine.
After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid-era Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–68. According to Qalqashandi, Lydda was an administrative centre of a wilaya in the Mamluk empire during the fourteenth and fifteenth centuries. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus.
In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special product ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market tolls, a total of 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city.
In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the First World War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda.
In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al-Aref estimated 400, and Nimr al-Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10+1⁄2 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east.
According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organisations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people (69.4% of the city's population) were classified as "Jews and Others", and 23,642 people (30.6%) as "Arab". Education According to CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils, and 13 high schools with 4,863 pupils.
About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held an exhibit, Georgeopolis, at the Petach Tikva art museum that focused on Lod. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home.
The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kingston,_Ontario]
Kingston is a city in Ontario, Canada, on the northeastern end of Lake Ontario. It is at the beginning of the St. Lawrence River and at the mouth of the Cataraqui River, the south end of the Rideau Canal. Kingston is near the Thousand Islands, a tourist region to the east, and the Prince Edward County tourist region to the west. Kingston is nicknamed the "Limestone City" because it has many heritage buildings constructed using local limestone. Growing European exploration in the 17th century, and the desire of Europeans to establish a presence close to local Native peoples to control trade, led to the founding of a French trading post and military fort at a site known as "Cataraqui" (generally pronounced /kætəˈrɒkweɪ/ ka-tə-ROK-way) in 1673. The outpost, called Fort Cataraqui and later Fort Frontenac, became a focus for settlement. After the Conquest of New France (1759–1763), the site of Kingston was relinquished to the British. Cataraqui was renamed Kingston after the British took possession of the fort, and Loyalists began settling the region in the 1780s. Kingston was named the first capital of the United Province of Canada on February 10, 1841. While its time as a capital city was short and ended in 1844, the community has remained an important military installation. The city is a regional centre of education and health care, being home to a major university, a large vocational college, and three major hospitals. Kingston was the county seat of Frontenac County until 1998; it is now a separate municipality from the County of Frontenac. Kingston is the largest municipality in southeastern Ontario and Ontario's 10th largest metropolitan area. John A. Macdonald, the first prime minister of Canada, lived in Kingston. History Cataraqui, Kingston's original name, is a derivation of an Indigenous-language name for the Kingston area.
The word may mean "Great Meeting Place", "the place where one hides", "impregnable", "muddy river", "place of retreat", "clay bank rising out of the water", "where the rivers and lake meet", "rocks standing in water", or "place where the limestone (or clay) is". Cataraqui was referred to as "the King's Town" or "King's Town" by 1787, in honour of King George III. The name was abbreviated to "Kingston" in 1788. Cataraqui today is an area around the intersection of Princess Street and Sydenham Road, where the village of Cataraqui (formerly known as Waterloo) was located. Cataraqui is also the name of a municipal electoral district. Archaeological evidence suggests people lived in the Kingston region as early as the Archaic period (about 9,000–3,000 years ago). Evidence of Late Woodland Period (about 500–1000 AD) early Iroquois occupation also exists. The first more permanent encampments by Indigenous people in the Kingston area began about 900 AD. The group that first occupied the area before the arrival of the French was probably the Wyandot people (Hurons), who were later displaced by Iroquoian groups. At the time the French arrived in the Kingston area, Five Nations Iroquois (Haudenosaunee) had settled along the north shore of Lake Ontario. Although the area around the south end of the Cataraqui River was often visited by Iroquois and other groups, Iroquois settlement at this location only began after the French established their outpost. By 1700, the north shore Iroquois had moved south, and the area once occupied by the Iroquois (which includes Kingston) became occupied by the Mississaugas, a subtribe of the Anishinaabe, who had moved south from the Lake Huron and Lake Simcoe regions. European commercial and military influence and activities centred on the fur trade developed and increased in North America in the 17th century. Fur trappers and traders were spreading out from their centres of operation in New France. 
French explorer Samuel de Champlain visited the Kingston area in 1615. To establish a presence on Lake Ontario for the purpose of controlling the fur trade with local Indigenous people, Louis de Buade de Frontenac, Governor of New France, established Fort Cataraqui, later to be called Fort Frontenac, at a location known as Cataraqui in 1673. The fort served as a trading post and military base, and gradually attracted Indigenous and European settlement. In 1674, René-Robert Cavelier, Sieur de La Salle, was appointed commandant of the fort. From this base, de La Salle explored west and south as far as the Gulf of Mexico. The fort was rebuilt several times and experienced periods of abandonment. The Iroquois siege of 1688 led to many deaths, after which the French destroyed the fort, but later rebuilt it. The British destroyed the fort during the Battle of Fort Frontenac (Seven Years' War) in 1758, and its ruins remained abandoned until the British took possession and partially reconstructed it in 1783. The fort was renamed Tête-de-Pont Barracks in 1787. It was turned over to the Canadian military in 1870–71 and is still used by the military. It was renamed Fort Frontenac in 1939. Partially reconstructed parts of the original fort can be seen today at the western end of the La Salle Causeway. In 1783, Frederick Haldimand, governor of the Province of Quebec, directed Major Samuel Holland, Surveyor-General of Quebec, to lay out a settlement for displaced British colonists, or Loyalists, who were fleeing north because of the American Revolutionary War, and to "minutely examine the situation and site of the Post formerly occupied by the French, and the land and country adjacent". Haldimand had originally considered the site as a possible location to settle loyal Mohawks. The survey would also determine whether Cataraqui was suitable as a navy base, since nearby Carleton Island, on which a British navy base was located, had been ceded to the Americans after the war.
Holland's report about the old French post mentioned "every part surpassed the favorable idea I had formed of it", that it had "advantageous Situations" and that "the harbour is in every respect Good and most conveniently situated to command Lake Ontario". Major John Ross, commanding officer of the King's Royal Regiment of New York at Oswego partly rebuilt Fort Frontenac in 1783. As commander, he played a significant role in establishing the Cataraqui settlement. To facilitate settlement, the British Crown entered into an agreement with the Mississaugas in October 1783 to purchase land east of the Bay of Quinte. Known as the Crawford Purchase, this agreement enabled settlement for much of the eastern section of the north shore of Lake Ontario. With the completion of the Mississauga agreement, settlement could proceed, although the planning of the layout of the townsite had not waited for the completion of the negotiations. The area was surveyed, and the survey report mentioned the area was deemed to have productive lands, abundant resources, a good harbour and an existing townsite. These requirements were considered ideal to settle the Loyalists. Three kinds of refugee Loyalists would settle at Cataraqui: 'associated' or 'incorporated' Loyalists who were organized into companies under militia officers, provincial colonial regiments and their dependants, and unincorporated Loyalists who came to Canada independently. Many Loyalist refugees had at first settled on Carleton Island, and operated businesses there. When the Island was ceded to the United States after the Revolutionary War, these Loyalists, along with their businesses, relocated to Cataraqui. Notable Loyalists who settled in the Cataraqui area include Molly Brant (the sister of Six Nations leader Joseph Brant); businessman and political figure Richard Cartwright; John Stuart, a clergyman, missionary and educator who arrived in 1785; and militia captain Johan Jost Herkimer. 
The first name given to the settlement by the Loyalists was King's Town, which would eventually develop into the current appellation. The first child born in King's Town was John Godfrey Lloyd, son of a discharged Hessian soldier from Herkimer County, New York, Johan Gottlieb Lloyd, and his wife Mary Klein. Klein was one of six daughters of John Cline, who was killed during the British-supported Seneca raids in the Mohawk Valley, New York, in May 1780. Klein was brought to Carleton Island with Molly Brant, and arrived in Cataraqui in 1783, before the influx of Loyalist settlers in 1784. A group of Loyalists from New York State, led by Captain Michael Grass who arrived in 1784 after sailing from New York City and up the St. Lawrence River, established a camp south of Fort Frontenac at Mississauga Point. The first high school (grammar school) in what later became the province of Ontario was established in Kingston in 1792 by Loyalist priest John Stuart, which evolved into Kingston Collegiate and Vocational Institute. During the War of 1812, Kingston (with a population of 2,250) was a major military centre. It was the base for the Lake Ontario division of the British naval fleet on the Great Lakes, which engaged in a vigorous arms race with the American fleet based at Sackets Harbor, New York for control of Lake Ontario. The Provincial Marine quickly placed ships into service and troops were brought in. A Royal Naval detachment built warships in order to control Lake Ontario. Fortifications and other defensive structures were built. The first Fort Henry was built during this time to protect the dockyards in Navy Bay. This fort was replaced by a more extensive fort on Point Henry in 1813. The present limestone citadel, constructed between 1832 and 1836, was intended to defend the recently completed Rideau Canal (opened in 1832) at the Lake Ontario end as well as the harbour and the naval dockyard. 
In 1843, the advanced battery overlooking the lake to the south was completed when the casemated commissariat stores and magazines were built. Fort Henry was garrisoned by the British until 1871. It was restored starting in 1936 and is a popular tourist attraction, now part of a World Heritage Site. Kingston's location at the Rideau Canal entrance to Lake Ontario made it the primary military and economic centre of Upper Canada after canal construction was completed in 1832. It was incorporated as a town in 1838; the first mayor of Kingston was Thomas Kirkpatrick. Kingston had the largest population of any centre in Upper Canada until the 1840s, and was incorporated as a city in 1846. Kingston became an important port as businesses relating to transshipment, or forwarding, grew. Since Kingston was at the junction of the St. Lawrence River and Lake Ontario, commodities shipped along the lake from the west, such as wheat, flour, meat, and potash, were unloaded and stored at Kingston to await transfer to vessels that could navigate the risky St. Lawrence. With the completion of the Rideau Canal, cargoes could be transported in a safer fashion, since the St. Lawrence River route could be bypassed. The canal was a popular route for transporting lumber. Regiopolis College (for training priests) was incorporated in March 1837, and in 1866 the college was given full degree-granting powers, although these were rarely used and the college closed in 1869. The building became the Hotel Dieu Hospital in 1892. The college reopened at another location in 1896. Queen's University, originally Queen's College, one of the first liberal arts universities, first held classes in March 1842; established by the Presbyterian Church, it later became a national institution. The Royal Military College of Canada (RMC) was founded in 1876. Kingston Penitentiary, Canada's first large federal penitentiary, was established in 1835 and operated until 2013.
Several more prisons would be established in later years in the greater Kingston area, including the federal Prison for Women (1930, closed in the 1990s), Millhaven Institution, Collins Bay Institution, Frontenac Institution (which amalgamated with Collins Bay in 2013), and Joyceville Institution. During the Upper Canada Rebellion of 1837–38, much of the local militia was posted in Kingston under Lieutenant-Colonel Richard Henry Bonnycastle, who completed construction of the new Fort Henry. Lord Sydenham, the Governor General of Canada, chose Kingston as the first capital of the united Canadas, and it served in that role from 1841 to 1844. The first meeting of the Parliament of the Province of Canada, on June 13, 1841, was held on the site of what is now Kingston General Hospital. The city was considered too small and lacking in amenities, however, and its location near the border made it vulnerable to American attack. Consequently, the capital was moved to Montreal in 1844, and it alternated between Quebec City and Toronto from 1849 until Ottawa, then a small lumber village known as Bytown, was selected as the permanent capital by Queen Victoria. Subsequently, Kingston's growth slowed considerably and its national importance declined. In 1846, with a population of 6,123, Kingston was incorporated as a city, with John Counter as the first mayor. By that time, there were stone buildings, both residential and commercial. The market house was particularly noteworthy as "the finest and most substantial building in Canada"; it contained many offices, government offices, space for church services, the post office, the City Hall (completed in 1844) and more. Five weekly newspapers were being published. Fort Henry and the marine barracks took up a great deal of space. Kingston Penitentiary had about 400 inmates. (The prison opened in 1835, with a structure intended to reform the inmates, not merely to hold or punish them.)
Industry included a steam grist mill, three foundries, two shipbuilders, ship repairers, and five wagon makers; tradesmen of many types also worked here. All freight was shipped by boat or barge, and ten steamboats per day were running to and from the town. Five schools for ladies and two for boys were operating, and the town had four bank agency offices. There were ten churches or chapels, and the recently opened Hotel Dieu hospital was operated as a charity by the sisters of the Religious Hospitallers of St. Joseph. Both Hotel Dieu and Kingston General Hospital (KGH) cared for victims of the typhus epidemic of 1847. The KGH site held the remains of 1,400 Irish immigrants fleeing the Great Famine who had died in fever sheds along the Kingston waterfront during the 1847 North American typhus epidemic. They were buried in a common grave, and the remains were re-interred at the city's St. Mary's Cemetery in 1966. In 1995, KGH was designated a National Historic Site of Canada because it is "the oldest public hospital in Canada still in operation with most of its buildings intact and thus effectively illustrates the evolution of health care in Canada in the 19th and 20th centuries". In 1848, the Kingston Gas Light Company began operation. (Gas lamps would be used until 1947.) By that time, the town was connected to the outside world by telegraph cables. The Grand Trunk Railway arrived in Kingston in 1856, providing service to Toronto in the west and to Montreal in the east. Its Kingston station was 3.2 km (2 mi) north of downtown. Kingston became an important rail centre for both passengers and cargo, owing to the difficulty of travelling by ship through the rapids-and-shoal-filled river. By 1869, the population had increased to 15,000, and there were four banks and two shipbuilding yards. Kingston was the home of Canada's first Prime Minister, John A. Macdonald. 
He won his first election to Kingston City Council in 1843, and would later represent the city for nearly 50 years at the national level, both before and after Confederation in 1867. One of his residences in Kingston, Bellevue House, is now a popular National Historic Site of Canada open to the public, depicting the house as it would have been in the 1840s when he lived there. In the early hours of April 18, 1840, a dock fire, fanned by high winds, spread to a warehouse containing between 70 and 100 kegs of gunpowder. The resulting explosion spread the fire throughout the city's downtown area, destroying a large number of buildings, including the old city hall. To prevent similar incidents in the future, the city began building with limestone or brick. This rebuilding phase was referred to as "the Limestone Revolution" and earned the city the nickname "The Limestone City". In the early 20th century, the Canadian Locomotive Company was the largest locomotive works in the British Empire, and the Davis Tannery was at one time the largest tannery in the British Empire. The tannery operated for a century and closed in 1973. Other manufacturing companies included the Marine Railway Company, which built steamboats; the Victoria Iron Works, which produced bar iron from scrap; several breweries; a distillery; and two soap and candle manufacturers. (By the start of the 21st century, most heavy industry would leave the city, and the former sites would be gradually rehabilitated and redeveloped.) A telephone system began operation in Kingston in 1881, when the population was 14,091. Electricity was not available in Kingston until 1888. 
Kingston grew moderately through the 20th century through a series of annexations of lands in adjacent Kingston Township, including a 1952 annexation of some 5,500 acres (22 km2) which encompassed areas west to the Little Cataraqui Creek (including the village of Portsmouth), where a number of large residential subdivisions were built in the late 1950s and early 1960s. Kingston's economy gradually evolved from an industrial to an institutional base after World War II. Queen's University grew from about 2,000 students in the 1940s to its present size of over 28,000 students, more than 90 per cent of whom are from outside the Kingston area. The Kingston campus of St. Lawrence College was established in 1969, and the college has 6,700 full-time students. The Royal Military College of Canada was founded in 1876, and has about 1,000 students. Kingston is a regional health care centre, anchored by Kingston General Hospital and the medical school at Queen's. The city's economy is also dominated by post-secondary education, military institutions, and prison installations. Municipal governance had been a topic of discussion since the mid-1970s due to the financial imbalance between the city and the surrounding townships, which by then had large residential areas and a population approaching that of the city proper. On January 1, 1998, the city was amalgamated with Kingston Township and Pittsburgh Township to form the new City of Kingston. The city's boundaries now encompass large rural areas north of Highway 401 and east of the Cataraqui River. The La Salle Causeway, which spans the Cataraqui River, included a bascule lift bridge that was demolished by the Government of Canada after an engineering error during refit work in 2024 led to its collapse. The bridge has temporarily been replaced by a modular bridge. Military history Kingston, being strategically located at the head of the St. 
Lawrence River and at the mouth of the Cataraqui River near the border with the United States, has been a site of military importance since Fort Frontenac was built in 1673. The French and, later, the British established military garrisons. The War of 1812 led to the bolstering of military troops, the servicing of ships, and the building of new fortifications to defend the town and the Naval Dockyard. Forts were constructed on Point Henry and at Point Frederick. A picket wall, or stockade, incorporating five blockhouses was built to the west of the town, and batteries were constructed. In November 1812, American naval forces pursued the British sloop Royal George, but the ship took refuge in Kingston harbour and the American forces withdrew. Several defensive fortifications were constructed in the late 1840s because of tensions with the United States. These include Fort Henry, four Martello towers (Cathcart Tower, Shoal Tower, Murney Tower, and Fort Frederick), and the Market Battery. Military ships were built at the Naval Dockyard at Point Frederick from 1788 to 1853. The peninsula near the entrance of the later Royal Military College of Canada was the headquarters of the Royal Navy between 1813 and 1853. (Fort Frederick, built in 1812–13, was also on this peninsula.) After the British army withdrew from most locations in Canada in 1870–71, two batteries of garrison artillery were formed by the Dominion Government; "A" Battery was in Kingston at Fort Henry and Tête du Pont Barracks (Fort Frontenac), while the other battery was in Quebec City. The batteries also served as schools of gunnery. Designated as the Regiment of Canadian Artillery, the regular component evolved into the Royal Canadian Horse Artillery, most of which remained housed at Tête du Pont Barracks until 1939. Following the withdrawal of British forces from Canada in 1870–71, the federal government recognized the need for an officer training college in Canada. 
In 1874, during the administration of Alexander Mackenzie, enabling legislation was passed for a military college, to be located on Point Frederick, the site of the former Royal Naval Dockyard. Before a formal college was established in 1876, there had been earlier proposals for military colleges in Canada. In 1865, students underwent a military course at the School of Military Instruction in Kingston, staffed by British regulars. The school enabled officers of the militia, or candidates for commission or promotion in the militia, to learn military duties, drill, and discipline; to command a company at battalion drill; to drill a company at company drill; and to learn the internal economy of a company and the duties of a company officer. The school was retained at Confederation in 1867. The withdrawal of imperial troops required a Canadian location for the training of military officers. Because of Kingston's military tradition and the fact that several military buildings already existed at the old naval dockyard, Point Frederick was chosen as the location for Canada's first military college, the Royal Military College of Canada (RMC). The facility, called simply The Military College until 1878, opened on Point Frederick with 18 students in 1876 under the first Commandant, Lieutenant-Colonel Edward Osborne Hewett of the Royal Engineers, providing cadets with academic and military training. In 1959, it became the first military college in the Commonwealth with the right to confer university degrees. Located east of Kingston's downtown, the army's Camp Barriefield, now McNaughton Barracks, was constructed at the beginning of the First World War and expanded during the Second World War. Camp Barriefield was named in honour of Rear-Admiral Robert Barrie (May 5, 1774 – June 7, 1841), a British naval officer noted for his service in the War of 1812. It was later renamed McNaughton Barracks after Andrew George Latta McNaughton, a former Minister of National Defence. 
Nearby Vimy Barracks was established in 1937 for the Royal Canadian Corps of Signals (later the Royal Canadian School of Signals). Vimy and McNaughton Barracks house the Canadian Forces School of Communications and Electronics (CFSCE), the Canadian Armed Forces' military communications training centre, and several other units; together, McNaughton Barracks and Vimy Barracks make up most of CFB Kingston (Canadian Forces Base Kingston). Major military facilities supported by CFB Kingston include Fort Frontenac, on the site of the original fort, and the Royal Military College of Canada. The Princess of Wales' Own Regiment (PWOR) has been a fixture in the City of Kingston since 1863. The PWOR operates as a Primary Reserve regiment, its members drawn from the Kingston-area community. During the First World War, the 21st Battalion was formed in Kingston and saw action in France from 1915, earning 18 battle honours, including one for its role in the Battle of Vimy Ridge. The Royal Canadian Horse Artillery also fought in Europe with the 2nd Canadian Division, taking part in 13 major battles. Fort Henry served as an internment camp for enemy aliens from August 1914 to November 1917. During the Second World War, the Stormont, Dundas and Glengarry Highlanders (SD&G) mobilized in June 1940. During the fighting, troops that had formed in Kingston received recognition from the government for their achievements. Fort Henry was again an internment camp (Camp 31) from September 1939 to December 1943. A military aerodrome, RCAF Station Kingston, was constructed to the west of Kingston to support flying training as part of the British Commonwealth Air Training Plan. Heritage sites Kingston is known for its historic properties, as reflected in the city's motto of "where history and innovation thrive". 
Including World Heritage Sites, National Historic Sites, provincially significant sites, municipally designated heritage properties, and listed or non-designated heritage properties, the city has 1,211 properties listed in the heritage register it maintains pursuant to the Ontario Heritage Act. In 2007, the Rideau Canal, along with the fortifications at Kingston, was designated a World Heritage Site, one of only 15 such sites in Canada. There are 21 National Historic Sites of Canada in Kingston. Demographics In the 2021 census conducted by Statistics Canada, Kingston had a population of 132,485 living in 57,836 of its 63,095 total private dwellings, a change of 7% from its 2016 population of 123,798. With a land area of 451.58 km2 (174.36 sq mi), it had a population density of 293.4/km2 (759.9/sq mi) in 2021. At the census metropolitan area (CMA) level in the 2021 census, the Kingston CMA had a population of 172,546 living in 73,506 of its 80,955 total private dwellings, a change of 7.1% from its 2016 population of 161,175. With a land area of 1,919.17 km2 (741.00 sq mi), it had a population density of 89.9/km2 (232.9/sq mi) in 2021. In 2021, 82.4 per cent of Kingston residents were white/European, 13.4 per cent were visible minorities, and 4.2 per cent were Indigenous. The largest visible minority groups were South Asian (3.4 per cent), Chinese (2.4 per cent), Black (2.0 per cent), Arab (1.2 per cent), and Latin American (1.0 per cent). Languages reported in the 2021 Canadian census: English – 82.86%, French – 3.12%, Mandarin – 1.25%, Portuguese – 1.12%, Punjabi – 1.2%, Arabic – 0.94%. In 2021, 65,490 Kingston residents, or about half of the population, were members of Christian groups; the largest were Roman Catholics, who numbered 30,385 (23.5 per cent), the Anglican Church of Canada (8,600 or 6.7 per cent), and the United Church of Canada (8,575 or 6.6 per cent). 
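The density and percentage-change figures in the census paragraph above follow directly from the quoted population and land-area numbers. A minimal Python sketch checking that arithmetic (the constant and helper names are illustrative, not from Statistics Canada):

```python
# Verify the 2021 census arithmetic quoted above for the City of Kingston
# and the Kingston CMA (all input figures are taken from the text).
KM2_PER_SQMI = 2.589988  # square kilometres in one square mile

def density(pop: int, area_km2: float) -> tuple[float, float]:
    """Return (people per km2, people per sq mi), rounded to one decimal."""
    per_km2 = pop / area_km2
    return round(per_km2, 1), round(per_km2 * KM2_PER_SQMI, 1)

def pct_change(new: int, old: int) -> float:
    """Percentage change between two census counts, one decimal place."""
    return round((new - old) / old * 100, 1)

# City of Kingston: 132,485 people on 451.58 km2, up from 123,798 in 2016
print(density(132_485, 451.58))      # (293.4, 759.9), as stated
print(pct_change(132_485, 123_798))  # 7.0, reported as "7%"

# Kingston CMA: 172,546 people on 1,919.17 km2, up from 161,175 in 2016
print(density(172_546, 1_919.17))    # (89.9, 232.9), as stated
print(pct_change(172_546, 161_175))  # 7.1
```

All four reported values round to the figures given in the text.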
The Presbyterian Church was particularly influential in the 19th-century development of Kingston's post-secondary education. The church was a founder and financial supporter of Queen's University until 1912, when it was agreed the university should become a secular institution. John A. Macdonald was also a member of St. Andrew's Presbyterian Church in Kingston. The religious history of the city can still be seen in the monumental stone churches throughout the downtown core, some of which now serve as community and co-working spaces. Newer churches in the city, like Reunion Kingston, tend to seek rental options rather than build new physical spaces. Groups other than Christians and the non-religious include Muslims (3,375 or 2.6 per cent), Hindus (1,670 or 1.3 per cent), and Jews (875 or 0.7 per cent). 55,355 people, or 42.9 per cent of the population, identified as non-religious. Government For its municipal government, the city is divided into 12 wards, each of which elects one councillor. All voters in the city cast ballots for the mayor, currently Bryan Paterson, an economics professor at the Royal Military College of Canada. Paterson was re-elected in the 2022 Ontario municipal elections for the 2022–2026 term. The councillors elected for the same term were: A referendum held as part of the 2018 municipal election on the adoption of ranked voting received support from a majority of respondents (62.9%). While the municipality began work towards implementing ranked voting for the 2022 election, city councils in Ontario were banned from using ranked-choice ballots by the Supporting Ontario's Recovery and Municipal Elections Act, 2020. In provincial elections, the city consists of two ridings. Most of the city is in Kingston and the Islands, formed after the 1999 redistribution, incorporating half of the former Frontenac-Addington and most of the former Kingston and the Islands riding. A small portion north of Highway 401 is in Lanark—Frontenac—Kingston, which was created in 2015. 
Kingston is part of two federal ridings. Most of the city is in Kingston and the Islands, created in 1966 from Kingston and parts of Hastings—Frontenac—Lennox and Addington and Prince Edward—Lennox. A small portion north of Highway 401 is in Lanark—Frontenac, which was created by the 2012 Canadian federal electoral redistribution and was legally defined in the 2013 representation order. It came into effect upon the call of the 42nd Canadian federal election on October 19, 2015. Economy Kingston's economy relies heavily on public sector institutions and establishments. The most important sectors are related to health care, higher education (Queen's University, the Royal Military College of Canada, and St. Lawrence College), government (including the military and correctional services), tourism, and culture. Manufacturing, and research and development, play a smaller role than in the past. The private sector accounts for half of Kingston's employment. One of Kingston's major industrial employers of the 20th century, the Canadian Locomotive Company, closed in 1969, and the former Alcan and DuPont operations employ far fewer people than in the past. However, owing to the city's central location between Toronto, Ottawa, Montreal, and Syracuse, NY, a trucking and logistics warehousing industry has developed. According to the Kingston Economic Development Corporation, the major employers in Kingston as of October 2023 were: According to Statistics Canada, the tourism industry represents a vital part of the city's economy. In 2004, the tourism industry contributed over 3,500 jobs to Kingston's economy. The industry has grown at a healthy rate and has become one of the city's best-performing sectors. Shifting travel trends and the aging baby boomer generation present unique opportunities for the industry. 
The success of Kingston's tourism industry is heavily dependent on information about travellers; however, data availability remains a challenge. Kingston has launched several tourism campaigns, including Downtown Kingston! and Yellow Door. The Downtown Kingston! campaign aims to attract more traffic to downtown Kingston; its mission statement promises "to promote downtown Kingston as the vibrant and healthy commercial, retail, residential, and entertainment centre of our region, attracting more people to live, shop, work and gather". The downtown area of Kingston is known as the central business district and is the gathering place for various events, including the Kingston Buskers Rendezvous, FebFest, the 1000 Islands Poker Run, and the Limestone City Blues Festival. Yellow Door, by contrast, promotes tourism to the entire city. The goal of the campaign is to increase consumers' exposure to Kingston tourism while remaining financially reasonable. The campaign used a yellow door as a metaphor for Kingston, and the good times people have there, and used street teams to engage potential tourists in nearby Toronto and Ottawa; it promotes interest by offering potential tourists a trip to Kingston. In 2013, Yellow Door received the Tourism Advertising Award of Excellence for the marketing and promotion of an Ontario tourism product. Tripadvisor users rate the following among the best attractions in and near the city: Canada's Penitentiary Museum, Fort Henry (Fort Henry National Historic Site), Wolfe Island (via ferry), Bellevue House National Historic Site, City Hall, and the nearby downtown waterfront. Ontario Travel's recommendations include cruising the Thousand Islands, The Grand Theatre, and Slush Puppie Place. Coat of arms Transportation Highway 401 is the principal access route into Kingston and runs across the northern section of the urbanized portion of the city. 
The first sections of the highway in the Kingston area were opened in 1958, although it was not fully completed for another ten years. In addition to the 401, the Waaban Crossing and the La Salle Causeway are bridge crossings of the Cataraqui River. Highway 15 is an alternative route between Kingston and the Ottawa region. From the south, Interstate 81 connects with Highway 401 at the Thousand Islands Border Crossing east of Kingston. Regular ferry service, using the MV Wolfe Islander IV operated by the Ontario Ministry of Transportation, runs between downtown Kingston and Wolfe Island. Seasonal ferry service from Cape Vincent, New York, via Wolfe Island, into downtown Kingston is an alternate route to and from the United States. There are also tourist ferries departing downtown Kingston regularly, with greater frequency in the summer months. Via Rail's Corridor service connects Kingston station to points along the main line between Windsor, Ontario, and Quebec City, and to Ottawa. Its current station was built in 1974, relocated from the original station site 2 km (1.2 mi) further east. Kingston is a regular stop on train services operating between Toronto and Ottawa and between Toronto and Montreal. On June 30, 2020, Air Canada announced its intention to cease operations at Kingston Norman Rogers Airport, saying the timing of the suspensions and shutdowns would be governed by requirements for regulatory notice. In March 2022, Pascan Aviation started regular passenger service between Kingston and Montréal–Trudeau International Airport. However, Pascan Aviation announced that it would be "pausing" its service from Kingston Airport starting in January 2023 for an undetermined amount of time, leaving the city without any passenger air service for the time being. Megabus (Coach Canada) provides frequent service from its Kingston Bus Terminal and Queen's University to a range of destinations in Ontario and Quebec. 
Passengers can book direct buses to Toronto's Union Station Bus Terminal, Toronto Pearson Airport, Toronto-Yorkdale, Montreal, Ottawa, Mississauga, Brockville, Cornwall, Kirkland, and Whitby. In 2021, Rider Express began serving Kingston along its Toronto-Ottawa route, providing Kingston with direct bus service to Toronto, Ottawa, Scarborough, and Belleville. Passengers depart and arrive at Rider Express's Kingston bus stop, located at 1185 Division St. at the Esso gas station by the McDonald's. In 2022, FlixBus began serving Kingston along its Windsor-Ottawa route, providing bus service from Kingston to Toronto, Ottawa, Hamilton, London, Windsor, Scarborough, Whitby–Oshawa, and Chatham-Kent. Passengers depart and arrive at FlixBus's Kingston bus stop, located at 275 Wellington Street in downtown Kingston. In 2022, the Red Arrow bus company included Kingston on a route between Toronto and Ottawa. Shuttle Kingston was reported in 2013 to connect to Watertown and Syracuse. Kingston Transit operates the local public transportation system within Kingston, running over 20 bus routes throughout the city, with additional routes added on a seasonal basis to support the needs of the student population. It charges a standard fare of $3.50 for riders over the age of 15 and provides free service to those under 15. Kingston Access Services provides accessible municipal bus service to residents who cannot use Kingston Transit due to disability. In 2017, Kingston Access Services celebrated its 50th anniversary as Ontario's oldest accessible transit service, having been established in 1967 as the "Kingston Bus for the Handicapped". Two taxi services operate in the city: Amey's Taxi and Modern City Taxi Cab Limited. Additionally, Uber provides service to customers in the city and is licensed and regulated by the Kingston Area Taxi Commission. 
The Uber services that operate in Kingston are UberX, Uber Comfort, and Uber Green. In October 2022, Kingston ranked 4th on Uber's "Nightlife Index" due to the high volume of rides between 10 pm and 2 am within the city. Culture Kingston hosts several festivals during the year, including the Kingston WritersFest, the Limestone City Blues Festival (ended 2023), the Kingston Canadian Film Festival, Artfest, Spring Reverb, the Kingston Buskers Rendezvous, the Kingston Jazz Festival, the Reelout Queer Film Festival, Feb Fest, the Wolfe Island Music Festival, the Skeleton Park Arts Festival, Kingston Pride, the Día de los Muertos Kingston Festival, and The Kick & Push Festival. Kingston is home to many artists who work in visual arts, media arts, and literature, and a growing number who work in time-based disciplines such as performance art. The contemporary arts scene in particular has two long-standing professional non-profit venues in the downtown area: the Agnes Etherington Art Centre (founded 1957) and the Modern Fuel Artist-Run Centre (founded 1977). Local artists often participate in the exhibition programming of each organization, while each also presents the work of artists from across Canada and around the world, in keeping with their educational mandates. Alternative venues for the presentation of exhibition programs in Kingston include the Union Gallery (Queen's University's student art gallery), Verb Gallery, Open Studio 22, the Kingston Arts Council gallery, The Artel: Arts Accommodations and Venue, and the Tett Centre for Creativity and Learning. The Kingston WritersFest occurred annually until 2024; in 2025, the festival was discontinued due to financial challenges. Circle of Wellness hosts the Día de los Muertos Kingston Festival, which occurs annually on the first Sunday of November. 
For over four decades, the Ukrainian Canadian Club of Kingston has hosted the "Lviv, Ukraine" pavilion as part of the Folklore tradition, holding this popular cultural and folk festival annually on the second full weekend in June at Regiopolis-Notre Dame High School. Literary events also happen throughout the year at the Kingston Frontenac Public Library and local bookstores. Writers who are or have been residents of Kingston include Steven Heighton, Bronwen Wallace, Helen Humphreys, Michael Ondaatje, Diane Schoemperlen, Michael Crummey, Mark Sinnett, Mary Alice Downie, Robertson Davies, Wayne Grady, Merilyn Simonds, Alec Ross, Jamie Swift, and Carolyn Smart. Music and theatre venues include the Isabel Bader Centre for the Performing Arts, The Grand Theatre, and The Wellington Street Theatre, which host performances from international, national, and local groups such as Domino Theatre, Theatre Kingston, The Vagabond Repertory Theatre Company, Hope Theatre Projects, and Bottle Tree Productions; other small groups dot the downtown area. The Kick & Push Festival was founded in 2015 to increase summer theatre programming downtown. The Kingston Symphony performs at The Grand Theatre, as do several amateur and semi-professional theatre groups. Slush Puppie Place (renamed from the K-Rock Centre and later Leon's Centre), a 5,800-seat entertainment venue and ice rink, opened in February 2008. The city has spawned several musicians and musical groups, most of whom are known mainly within Canada, but a few of whom have achieved international success. These include The Tragically Hip, Steppenwolf frontman John Kay, The Abrams, The Glorious Sons, The Mahones, jazz singer Andy Poole, Bedouin Soundclash, Sarah Harmer, The Arrogant Worms, The Headstones, The Inbreds, The Meringues, PS I Love You, and members of Moist, including singer David Usher. 
The intersection of Princess Street and Division Street, in the downtown core, is known colloquially as Madeli Park and is a common meeting place for Queen's students, who frequently go there to celebrate accomplishments, typically after exams. This tradition dates back to the early years of the university. Kingston is also the birthplace of Bryan Adams. The first winner of the television series Canadian Idol was Kingston native Ryan Malcolm. Poet Michael Andre was raised in Kingston. Zal Yanovsky of The Lovin' Spoonful lived in Kingston until his death in 2002. Comedian and actor Dan Aykroyd has a residence just north of Kingston and is a frequent face in town. He was briefly a minor partner in a restaurant called Aykroyd's Ghetto House Café on upper Princess Street during the 1990s, which prominently featured a Blues Brothers car projecting from the second-storey wall. Education Kingston is the site of two universities, Queen's University and the Royal Military College of Canada, and a community college, St. Lawrence College. According to Statistics Canada, Kingston has the most PhD holders per capita of any city in Canada. Queen's University is one of Ontario's oldest universities and offers a variety of degree programs. The university was founded in 1841 under a royal charter from Queen Victoria and has an enrolment of over 31,000 students. Queen's main campus is largely self-contained but within close walking distance of downtown Kingston, making it a pedestrian-friendly university for students and faculty alike. The Royal Military College of Canada, established in 1876, is Canada's only military university (Collège Militaire Royal in Saint-Jean-sur-Richelieu, Quebec, is a military college), providing academic and leadership training to officer cadets, other members of Canada's armed forces, and civilians. There are 1,100 undergraduate students and 500 full- and part-time graduate students. St. 
Lawrence College offers baccalaureate degree programs at its Kingston campus in behavioural psychology, industrial trades, microelectronics, nursing, and business administration (the latter via a partnership with Laurentian University), in addition to certificate, diploma, and advanced diploma programs. The Limestone District School Board serves students in the City of Kingston and the counties of Frontenac and Lennox and Addington. Approximately 21,000 students attend its 70 elementary and secondary schools and supporting education centres, along with the Limestone School of Community Education, which provides adult education and training programs. The Algonquin and Lakeshore Catholic District School Board serves students of the Roman Catholic faith; approximately 12,800 students attend 36 elementary schools and five secondary schools in this district. The Catholic high schools in the immediate Kingston area include Regiopolis Notre-Dame and Holy Cross Catholic High School. The francophone community is served by two school boards, the Conseil des écoles publiques de l'Est de l'Ontario and the Conseil des écoles catholiques du Centre-Est, each providing one secondary school in the area. Secondary schools in Kingston: Correctional institutions and facilities Kingston has the largest concentration of federal correctional facilities in Canada. The facilities are operated by the Correctional Service of Canada. Of the nine institutions in the Kingston area, seven are within the city's municipal boundaries. Until 2000, Canada's only federal correctional facility for women, the Prison for Women (nicknamed "P4W"), was also in Kingston. As a result of the report of the Commission of Inquiry into Certain Events at the Prison for Women in Kingston, the facility was closed in 2000. 
Queen's University purchased the property with the intention of renovating it to house the Queen's Archives, but the interior of the building was awarded a heritage designation, and Queen's found the required upgrades difficult to finance. In 2018, Queen's University sold the property to Siderius Developments, which in 2024 divided it into blocks to begin residential development. In September 2013, after almost 180 years of housing prisoners, Kingston Penitentiary closed. The maximum security prison had been named a National Historic Site of Canada in February 1990 due to its history and reputation. In its early years, the prison played a vital role in building the city: it brought prosperity to Kingston and, along with eight other prisons built in the area, helped create a substantial local economy. Geography and climate Kingston is within the Mixedwood Plains Ecozone, which in the Kingston area is dominated by a mixture of deciduous and coniferous tree species and abundant water resources. The region is underlain mostly by Ordovician limestone of the Black River Group. Being within hardiness zone 5, Kingston has a moderate humid continental climate (Köppen climate classification Dfb), with cooler summers and colder winters than most of Southern Ontario. Proximity to Lake Ontario has a moderating effect on the climate: it tempers the heat and can on occasion increase precipitation, especially during heavy snowfall events. Mild to strong breezes blowing off Lake Ontario make Kingston one of the most consistently windy cities in Canada, especially near the water. As a result of this moderation, the all-time high is a relatively modest 35.6 °C (96.1 °F), recorded on July 9, 1936. However, due to the humidity, humidex values on such hot days are normally very high. The coldest temperature ever recorded in Kingston was −35.6 °C (−32.1 °F), on February 17, 1896. 
The central part of the city is between the Cataraqui River to the east and the Little Cataraqui Creek to the west, with outlying areas extending in both directions. The eastern part of the city is accessible by the La Salle Causeway on Highway 2. Major features of Kingston's waterfront include Flora MacDonald Confederation Basin, Portsmouth Olympic Harbour, Collins Bay, Wolfe Island, Garden Island, and the Cataraqui River (including the Inner Harbour and, within that, Anglin Bay). Sports Kingston lays claim to being the birthplace of ice hockey, though this is contested. Support for this is found in a journal entry of a British Army officer in Kingston in 1843, who wrote: "Began to skate this year, improved quickly and had great fun at hockey on the ice." Kingston is also home to the oldest continuing hockey rivalry in the world, by virtue of a game played in 1886 on the frozen Kingston harbour between Queen's University and the Royal Military College of Canada. To mark this event, the city hosts an annual game between the two institutions, played on a cleared patch of frozen lake with both teams wearing period-correct uniforms and using rules from that era. The two schools also contest the annual Carr-Harris Cup, named for Lorne Carr-Harris, under modern competitive conditions to commemorate and continue their rivalry. The Memorial Cup, which serves as the annual championship event for the Canadian Hockey League, began in 1919 on the initiative of Kingstonian James T. Sutherland; the first championship was held in Kingston. Sutherland, a member of the Hockey Hall of Fame, also helped establish the annual exhibition game between the Royal Military College of Canada and the United States Military Academy (West Point) in 1923. Kingston is represented in the Ontario Hockey League (OHL) by the Kingston Frontenacs. Kingston also had a team in the Ontario Junior Hockey League (OJHL), the Kingston Voyageurs, but the team ceased operations after the 2018–19 season. 
The Original Hockey Hall of Fame, formerly the International Hockey Hall of Fame, was established in September 1943; a dedicated building was constructed in 1965. The original building was near the Kingston Memorial Centre (which opened in 1950), but the hall has since been relocated to Kingston's west end at the Invista Centre. Founded by the National Hockey League (NHL) and the Canadian Amateur Hockey Association, it is the oldest sports hall of fame in Canada. The museum's collection holds various items that pay homage to Kingston's role in the history of hockey in Canada, including the original square hockey puck from the first Queen's University vs. Royal Military College of Canada (RMC) game in 1886, hockey's oldest sweater, worn by a Queen's student in 1894, and Canada's first Olympic gold medal, from 1924, among others. Slush Puppie Place, in the downtown core, opened in February 2008 and serves as home ice for the Frontenacs. The Voyageurs played at the Invista Centre in the city's west end; that arena is now home to the Kingston Wranglers of the United States Premier Hockey League. The city is known for its fresh-water sailing and hosted the sailing events for the 1976 Summer Olympics. CORK (Canadian Olympic-training Regatta, Kingston), now organized by CORK/Sail Kingston, is still held every August. Since 1972, Kingston has hosted more than 40 World and Olympic sailing championships. Kingston is listed by a panel of experts among the best yacht racing venues in the US, even though Kingston is in Canada. Kingston sits amid excellent cruising and boating territory, with easy access to Lake Ontario, the St. Lawrence River, and the Thousand Islands, including the Thousand Islands National Park (formerly the St. Lawrence Islands National Park). Kingston is also home to the youth sail training ship St. Lawrence II. 
During the summers, the RMC campus in Kingston hosts a Royal Canadian Sea Cadets camp, HMCS Ontario, which provides sail training and much other training to youth from across Canada. The Kingston Yacht Club in downtown Kingston has a learn-to-sail program for both children and adults. Kingston is known for fresh-water wreck diving. Kingston's shipwrecks are well preserved by its cool, fresh water, and the recent zebra mussel invasion has caused a dramatic improvement in water clarity that has enhanced the quality of diving in the area. The Kingston Lawn Bowling Club has been at its location on Napier Street since 1932, although the sport's beginnings in Kingston have been traced back to 1914. While the club offers a variety of recreational opportunities, a number of its members have gone on to compete successfully at the provincial level and beyond; most notable of these was Dick Edney, who was inducted into the Kingston and District Sports Hall of Fame in 2005. The Kingston area has eight golf courses, two of which are entirely public. The Kingston Golf Club, established in 1884, was a founding member of the Royal Canadian Golf Association in 1895; however, this club ceased operating in the mid-1920s. The first winner of the Canadian Amateur Championship, held that same year, was Kingstonian Thomas Harley, a Scottish immigrant carpenter. Richard H. (Dick) Green, who immigrated to the area from England in the late 1920s, was the club professional for nearly 40 years at Cataraqui Golf and Country Club (founded in 1917 and redesigned by Stanley Thompson in 1930). Green also helped design several courses in eastern Ontario, including Smiths Falls (1949), Glen Lawrence (1955), Rideau Lakes (1961), Amherstview (1971), Garrison (1971), Evergreen (1972), Belle Park Fairways (1975), Rivendell (1979), and Colonnade (1984). 
Matt McQuillan, a professional player on the PGA Tour for the 2011 and 2012 seasons, was born and raised in Kingston and developed his game at the Garrison Golf and Curling Club. McQuillan won the 2005 Telus Edmonton Open on PGA Tour Canada. Three curling clubs are in the Kingston area: the Cataraqui Golf & Country Club, the Garrison Golf & Curling Club, and the Royal Kingston Curling Club. The Royal Kingston Curling Club (RKCC) was founded in 1820 and was granted Royal patronage in 1993. In 2006, the RKCC moved to a new facility at 130 Days Road to make way for the construction of a new complex at Queen's University, the Queen's Centre. Kingston has a history of hosting major curling competitions: in 2020 it hosted the Tim Hortons Brier, the national men's curling championship, which it had previously hosted in 1957, and in 2013 it hosted the Scotties Tournament of Hearts, the national women's curling championship. The Kingston Panthers Rugby Football Club (KPRFC) was founded in 1959 and has since established a reputation as a strong community player. KPRFC is a non-profit organization answering directly to the Eastern Ontario Rugby Union (EORU), Rugby Ontario, and Rugby Canada. The club celebrated its fortieth anniversary with an EORU Division 1 championship, won at Twin Elm Rugby Park in Ottawa, Ontario. The earliest known incarnation of an organized football team in Kingston is the Kingston Granites, which played in the Ontario Rugby Football Union (ORFU), the predecessor league to the Canadian Football League. The team played for four seasons between 1898 and 1901, winning one ORFU title in 1899 by defeating the Ottawa Rough Riders 8–0. Kingston also hosted the 10th Grey Cup on December 2, 1922. The Limestone Grenadiers now represent Kingston and the surrounding area in the Ontario Varsity Football League. 
The club's franchise catchment area draws players from Frontenac, Hastings, Lanark, Leeds, Lennox and Prince Edward counties. League play runs from late May through August. The Junior and Varsity teams' main schedule pits the Grenadiers against eastern Ontario opponents, with cross-over games against western Ontario teams leading to a provincial championship game. The Kingston Volleyball Club (KVC) was founded in 2015. It is a non-profit organization and a member of the Ontario Volleyball Association (OVA) and Volleyball Canada; the club relies on fundraising to operate. Kingston gained a soccer presence in 2011, when Kingston FC represented the city in the Canadian Soccer League's second division. In 2012, the club was promoted to the league's first division and competed there until 2015. Its greatest success came in 2013, when the club won the divisional title and finished as runner-up in the playoff championship final. After Kingston left the CSL, the city was represented in League1 Ontario by the Kingston Clippers, with both a men's and a women's side; the Clippers played their final season in the league in 2016. In 2025, it was announced that the Kingston Sentinels would begin play in League2 Ontario, the province's third division, in the 2026 season, marking the return of semi-professional soccer to the city. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/FP_(programming_language)] | [TOKENS: 716] |
Contents FP (programming language) FP (for functional programming) is a programming language created by John Backus to support the function-level programming paradigm. It allows building programs from a set of generally useful language primitives while avoiding named variables (a style also called tacit programming or "point-free"). It was heavily influenced by APL, developed by Kenneth E. Iverson in the early 1960s. The FP language was introduced in Backus's 1977 Turing Award paper, "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". The paper sparked interest in functional programming research, eventually leading to modern functional languages, which are largely founded on the lambda calculus paradigm rather than the function-level paradigm Backus had hoped for. In his Turing Award paper, Backus described how the FP style is different: An FP system is based on the use of a fixed set of combining forms called functional forms. These, plus simple definitions, are the only means of building new functions from existing ones; they use no variables or substitution rules, and they become the operations of an associated algebra of programs. All the functions of an FP system are of one type: they map objects onto objects and always take a single argument. FP saw little use beyond academia. In the 1980s, Backus created a successor language, FL, as an internal project at IBM Research. Overview The values that FP programs map into one another comprise a set that is closed under sequence formation: if x1, ..., xn are values, then the sequence ⟨x1, ..., xn⟩ is also a value. These values can be built from any set of atoms: booleans, integers, reals, characters, etc. ⊥ is the undefined value, or bottom. Sequences are bottom-preserving: a sequence containing ⊥ is itself ⊥, i.e. ⟨x1, ..., ⊥, ..., xn⟩ = ⊥. FP programs are functions f that each map a single value x into another, written f:x for the result of applying f to x. Functions are either primitive (i.e., provided with the FP environment) or are built from the primitives by program-forming operations (also called functionals). 
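FP has no widely available implementation today, but the variable-free, function-level style it introduced can be sketched in a modern language. The following Python sketch builds Backus's well-known inner-product program, Def IP ≡ (/+)∘(α×)∘trans, entirely by combining functions; the names compose, alpha, insert, and trans are our own stand-ins for FP's combining forms, not FP's actual syntax.

```python
from functools import reduce

def compose(*fs):
    """Right-to-left composition: compose(f, g, h)(x) = f(g(h(x)))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

def alpha(f):
    """Apply-to-all: maps f over every element of a sequence."""
    return lambda xs: [f(x) for x in xs]

def insert(f):
    """Insert: folds a binary function over a sequence. FP's /f is a
    right fold; reduce is a left fold, which agrees for associative
    operations such as addition."""
    return lambda xs: reduce(f, xs)

trans = lambda seqs: [list(p) for p in zip(*seqs)]  # transpose
mul   = lambda p: p[0] * p[1]                       # x applied to a pair
add   = lambda a, b: a + b

# Def IP = (/+) . (alpha x) . trans  -- no data variable is named here
IP = compose(insert(add), alpha(mul), trans)

print(IP([[1, 2, 3], [6, 5, 4]]))  # 1*6 + 2*5 + 3*4 = 28
```

The definition of IP mentions only functions and functionals, never its argument, which is precisely the point-free discipline Backus advocated.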
An example of a primitive function is constant, which transforms a value x into the constant-valued function x̄, where x̄:y = x for any defined y. Functions are strict: f:⊥ = ⊥ for every function f. Another example of a primitive function is the selector function family, denoted by 1, 2, ..., where i:⟨x1, ..., xn⟩ = xi for 1 ≤ i ≤ n, and ⊥ otherwise. Functionals In contrast to primitive functions, functionals operate on other functions. For example, some functions have a unit value, such as 0 for addition and 1 for multiplication. The functional unit produces such a value when applied to a function f that has one: unit + = 0, unit × = 1, and unit f = ⊥ where no unit exists. The core functionals of FP include composition (f∘g:x = f:(g:x)), construction ([f1, ..., fn]:x = ⟨f1:x, ..., fn:x⟩), condition ((p → f; g):x is f:x if p:x is true, g:x if p:x is false, and ⊥ otherwise), apply-to-all (αf:⟨x1, ..., xn⟩ = ⟨f:x1, ..., f:xn⟩), and insert (/f:⟨x⟩ = x and /f:⟨x1, x2, ..., xn⟩ = f:⟨x1, /f:⟨x2, ..., xn⟩⟩). Equational functions In addition to being constructed from primitives by functionals, a function may be defined recursively by an equation, the simplest kind being f ≡ Ef, where Ef is an expression built from primitives, other defined functions, and the function symbol f itself, using functionals. FP84 FP84 is an extension of FP to include infinite sequences, programmer-defined combining forms (analogous to those that Backus added to FL, his successor to FP), and lazy evaluation. Unlike FFP, another of Backus's own variations on FP, FP84 makes a clear distinction between objects and functions: i.e., the latter are no longer represented by sequences of the former. FP84's extensions are enabled by removing the FP restriction that sequence construction be applied only to non-⊥ objects: in FP84 the entire universe of expressions (including those whose meaning is ⊥) is closed under sequence construction. FP84's semantics are embodied in an underlying algebra of programs, a set of function-level equalities that may be used to manipulate and reason about programs. |
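To make the primitives and functionals above concrete, here is a small Python sketch of strict constant functions, the selector family, the condition functional, and a recursive equational definition of factorial. The names BOTTOM, constant, sel, and cond are our own illustrative stand-ins; real FP writes selectors as bare numerals and condition as (p → f; g).

```python
BOTTOM = None  # stands in for FP's undefined value, bottom

def constant(x):
    """Constant function x-bar: returns x for any defined argument.
    Functions are strict, so applying it to bottom yields bottom."""
    return lambda y: BOTTOM if y is BOTTOM else x

def sel(i):
    """Selector family: sel(i) picks the i-th element (1-based),
    yielding bottom when the sequence is too short."""
    return lambda xs: xs[i - 1] if isinstance(xs, list) and len(xs) >= i else BOTTOM

def cond(p, f, g):
    """Condition functional: (p -> f; g)."""
    return lambda x: f(x) if p(x) else g(x)

# Equational definition in FP spirit: fact = (eq0 -> constant 1; mul of [id, fact of sub1])
eq0  = lambda x: x == 0
sub1 = lambda x: x - 1
mul  = lambda p: p[0] * p[1]

def fact(x):
    return cond(eq0, constant(1), lambda y: mul([y, fact(sub1(y))]))(x)

print(sel(2)([10, 20, 30]))  # 20
print(fact(5))               # 120
```

The recursive equation for fact mirrors the "simplest kind" of equational definition described above: the function symbol appears on both sides, combined only through functionals.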
======================================== |
[SOURCE: https://techcrunch.com/2026/02/17/cohere-launches-a-family-of-open-multilingual-models/] | [TOKENS: 889] |
Cohere launches a family of open multilingual models Enterprise AI company Cohere launched a new family of multilingual models on the sidelines of the ongoing India AI Summit. The models, dubbed Tiny Aya, are open-weight (meaning their trained weights are publicly available for anyone to use and modify), support over 70 languages, and can run on everyday devices like laptops without requiring an internet connection. The family, launched by the company's research arm Cohere Labs, supports South Asian languages such as Bengali, Hindi, Punjabi, Urdu, Gujarati, Tamil, Telugu, and Marathi. The base model contains 3.35 billion parameters (a measure of its size and complexity). Cohere has also launched TinyAya-Global, a version fine-tuned to better follow user commands, for apps that require broad language support. Regional variants round out the family: TinyAya-Earth for African languages; TinyAya-Fire for South Asian languages; and TinyAya-Water for Asia Pacific, West Asia, and Europe. "This approach allows each model to develop stronger linguistic grounding and cultural nuance, creating systems that feel more natural and reliable for the communities they are meant to serve. At the same time, all Tiny Aya models retain broad multilingual coverage, making them flexible starting points for further adaptation and research," the company said in a statement. 
Cohere noted that these models, which were trained on a single cluster of 64 H100 GPUs (a type of high-powered chip by Nvidia) using relatively modest computing resources, are ideal for researchers and developers building apps for audiences that speak native languages. The models are capable of running directly on devices, so developers can use them to power offline translation. The company noted that it built its underlying software to suit on-device usage, requiring less computing power than most comparable models. In linguistically diverse countries like India, this kind of offline-friendly capability can open up a diverse set of applications and use cases without the need for constant internet access. The models are available on HuggingFace, the popular platform for sharing and testing AI models, and the Cohere Platform; developers can download them from HuggingFace, Kaggle, and Ollama for local deployment. The company is also releasing training and evaluation datasets on HuggingFace and plans to release a technical report detailing its training methodology. The startup's CEO, Aidan Gomez, said last year that the company plans to go public "soon." According to CNBC, the company ended 2025 on a high note, posting $240 million in annual recurring revenue, with 50% growth quarter-over-quarter throughout the year. Ivan covers global consumer tech developments at TechCrunch. He is based out of India and has previously worked at publications including Huffington Post and The Next Web. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-26] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. 
The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. 
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. 
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. 
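The slide rule's multiplication trick rests on the logarithm identity log(ab) = log(a) + log(b): aligning two logarithmic scales adds lengths proportional to log(a) and log(b), and the result is read off as the number whose logarithm is that sum. A quick Python check of the identity (the particular numbers are arbitrary):

```python
import math

a, b = 3.7, 12.5

# Adding logarithms and exponentiating back reproduces the product,
# which is what aligning two logarithmic scales does mechanically.
product_via_logs = math.exp(math.log(a) + math.log(b))

print(math.isclose(product_via_logs, a * b))  # True
```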
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. 
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. The machine was about a century ahead of its time. All the parts for his machine had to be made by hand, which was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. 
In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design for a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. 
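The formula that Torres Quevedo's 1914 design was to tabulate, a^x(y − z)^2 over a sequence of sets of values, is easy to state in code. The sketch below simply evaluates it for a few sets of values; the particular numbers are our own illustration, not from the essay.

```python
def torres_formula(a, x, y, z):
    """Evaluate a^x * (y - z)^2, the formula from Torres Quevedo's 1914 essay."""
    return a**x * (y - z) ** 2

# A sequence of sets of values, as the machine was meant to process.
for a, x, y, z in [(2, 3, 7, 4), (3, 2, 5, 1)]:
    print(torres_formula(a, x, y, z))  # prints 72, then 144
```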
By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. 
The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company whose sole purpose was developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus.
After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945.
The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer.
It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, and so give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life.
Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman.
Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example.
Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.

Types

Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer.

Hardware

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and by which it is provided with data; examples include keyboards, mice, scanners, and microphones. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form; examples include displays, printers, and speakers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): read the instruction at the address held in the program counter, decode it into control signals, fetch any data the instruction requires from memory, carry out the operation (for example, in the ALU or an I/O device), write any result back to memory, and advance the program counter to the next instruction. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
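The fetch-decode-execute cycle and the role of the program counter in jumps can be sketched in a few lines of Python. The instruction set here (ADD, JUMP, JUMP_IF_LT, HALT) is invented purely for illustration and does not correspond to any real CPU:

```python
# A toy fetch-decode-execute loop. The program counter (pc) normally
# advances by one, but "jump" instructions overwrite it, which is
# exactly how loops and conditional execution arise.

def run(program, max_steps=1000):
    pc = 0    # program counter: index of the next instruction to read
    acc = 0   # a single accumulator register
    steps = 0
    while pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        pc += 1                      # default: advance to the next instruction
        if op == "ADD":
            acc += arg
        elif op == "JUMP":
            pc = arg                 # unconditional jump: overwrite the counter
        elif op == "JUMP_IF_LT":     # conditional jump
            threshold, target = arg
            if acc < threshold:
                pc = target
        elif op == "HALT":
            break
        steps += 1
    return acc

# A loop: keep adding 1 while the accumulator is below 5.
program = [
    ("ADD", 1),              # 0: acc += 1
    ("JUMP_IF_LT", (5, 0)),  # 1: if acc < 5, jump back to instruction 0
    ("HALT", None),          # 2: stop
]
print(run(program))  # → 5
```

Because the "jump" merely rewrites the program counter, the same mechanism supports both the backward jump of a loop and a forward jump that skips instructions.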
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. 
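The claim that a computer supporting only the simplest operations can be programmed to perform more complex ones is easy to demonstrate: the sketch below multiplies using nothing but addition and comparison, the bare minimum any ALU provides.

```python
# Multiplication built from repeated addition: what a computer must do
# (more slowly) when its ALU does not support multiply directly.

def multiply(a, b):
    """Multiply two non-negative integers using only addition and comparison."""
    result = 0
    count = 0
    while count < b:          # comparison: returns a Boolean truth value
        result = result + a   # addition: the only arithmetic operation used
        count = count + 1
    return result

print(multiply(7, 6))  # → 42
```

As the text notes, the result is the same as a hardware multiply; it just takes more steps, which is why ALUs with richer operation sets are faster for those operations.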
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
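The byte conventions described above (256 values per byte, multi-byte integers, two's complement for negatives) can be checked directly with Python's built-in byte conversions:

```python
# One byte holds 256 distinct values: 0..255 unsigned, or -128..127 signed.
assert int.from_bytes(b"\xff", "big", signed=False) == 255
assert int.from_bytes(b"\xff", "big", signed=True) == -1

# Larger numbers span several consecutive bytes (here, four).
n = 100000
raw = n.to_bytes(4, "big")
assert len(raw) == 4
assert int.from_bytes(raw, "big") == n

# Two's complement: -5 stored in a single byte is 256 - 5 = 251.
assert (-5).to_bytes(1, "big", signed=True) == bytes([251])
```

The same eight bits mean 255 or −1 depending on the signed interpretation, which is exactly why the text stresses that software, not the memory itself, gives the numbers their significance.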
Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory.
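The cache behaviour described above, where frequently needed data is moved into a small fast store automatically, can be sketched in a few lines. The sizes and the least-recently-used eviction policy here are illustrative choices, not a description of any particular hardware:

```python
# A minimal sketch of a cache: a small, fast store in front of a larger,
# slower one, filled automatically on first access.

from collections import OrderedDict

class CachedMemory:
    def __init__(self, main_memory, cache_size=4):
        self.main = main_memory       # the large, "slow" store
        self.cache = OrderedDict()    # the small, "fast" store (in LRU order)
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:         # cache hit: no slow access needed
            self.hits += 1
            self.cache.move_to_end(address)
            return self.cache[address]
        self.misses += 1                  # cache miss: go to main memory...
        value = self.main[address]
        self.cache[address] = value       # ...and keep a copy for next time
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the least recently used
        return value

mem = CachedMemory(list(range(100)))
for _ in range(3):
    mem.read(7)              # repeated reads of the same address
print(mem.hits, mem.misses)  # → 2 1
```

Only the first access pays the cost of reaching main memory; the program itself did nothing special to benefit, mirroring the "without any intervention on the programmer's part" point above.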
A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks.
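The time-slicing just described can be sketched with Python generators. Real systems pre-empt programs via hardware interrupts; in this simplified sketch each program voluntarily yields at the end of its slice, but the round-robin switching and the interleaved result are the same idea:

```python
# A toy round-robin scheduler: two "programs" appear to run at once,
# even though only one executes at any given instant.

from collections import deque

log = []

def counter(name, n):
    for i in range(n):
        log.append(f"{name}:{i}")
        yield                      # give up the CPU: the time slice ends here

def scheduler(programs):
    ready = deque(programs)
    while ready:
        prog = ready.popleft()     # pick the next program in turn
        try:
            next(prog)             # run it for one slice
            ready.append(prog)     # not finished: back of the queue
        except StopIteration:
            pass                   # finished: drop it

scheduler([counter("A", 2), counter("B", 2)])
print(log)  # → ['A:0', 'B:0', 'A:1', 'B:1']
```

The output interleaves the two programs' work, which is what makes them appear simultaneous to an observer much slower than the switching rate.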
If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Software

Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own.
When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. 
Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language: Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. 
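The assembly listing referenced above is elided in this copy; it sums the integers from 1 to 1,000 using a loop with a conditional branch. The same computation, expressed in Python rather than MIPS assembly, is only a few instructions:

```python
# Summing the integers 1 to 1,000: the task that would take a pocket
# calculator thousands of button presses takes a computer a short loop.

total = 0
n = 1
while n <= 1000:    # the conditional branch: repeat while n is in range
    total += n      # the repeated addition
    n += 1
print(total)  # → 500500
```

Once told to run this loop, the machine carries out all thousand additions without further intervention, which is the point the surrounding text makes about repetition under program control.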
This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. 
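The translation an assembler performs, from mnemonics to numeric opcodes, can be sketched for a hypothetical machine. The mnemonics and opcode values below are invented for illustration; real instruction sets also encode addressing modes, register fields, and variable-length operands:

```python
# A toy assembler: each mnemonic becomes a numeric operation code, so the
# assembled program is itself just a list of numbers that could sit in
# memory alongside the data it operates on.

OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "JUMP": 0x04}

def assemble(source):
    """Translate lines like 'ADD 7' into (opcode, operand) number pairs."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

program = """
LOAD 10
ADD 7
STORE 20
"""
print(assemble(program))  # → [(1, 10), (2, 7), (3, 20)]
```

That the output is nothing but numbers is the von Neumann point made above: the assembled program can be stored, copied, and manipulated exactly like any other data.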
Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. 
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.
Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/William_McClure_Thomson] | [TOKENS: 647] |
Contents William McClure Thomson William McClure Thomson (31 December 1806 – 8 April 1894) was an American Protestant missionary who worked in Ottoman Syria. After spending 25 years in Syria, he published a bestselling book – The Land and the Book – that described his experiences and observations during his travels. He used his knowledge of the region to illustrate and explain passages from the Bible, giving readers a new perspective on the scriptures. Career Thomson was the son of a Presbyterian minister. He was a graduate of Miami University, Ohio. When he arrived in Beirut on February 24, 1833, he was only the eighth American Protestant missionary to arrive in the region. Two of his predecessors had died, and two had been recalled. In April 1834, Thomson was in Jaffa when a revolt broke out, and he was unable to return to Jerusalem until Ibrahim Pasha recaptured the city with 12,000 troops. While he was away, his wife had given birth to a son, but she died just 12 days after he returned. After his wife's death, Thomson relocated to Beirut with his young son. There, in 1835, with Rev. Story Hebard, he established a boarding school for boys. In August 1840, Thomson and other American missionaries were evacuated from Beirut by the USS Cyane, and witnessed the bombardment of the city by a coalition of British, Austrian, and Turkish naval forces under the command of Charles Napier. The bombardment, which lasted for one month, forced Pasha's army to retreat. Meanwhile, a conflict broke out between the Druze and Maronite communities in Lebanon. In 1843, Thomson and Cornelius Van Alen Van Dyck founded a boys seminary in Abeih, Lebanon. Two years later, in 1845, a new outbreak of violence occurred, and Thomson once again played a role in negotiating a truce. His local nickname became Abu Tangera—father of the cooking pot—after his broad-rimmed hat. With his local knowledge, he was used as a dragoman by several Biblical scholars. 
In 1852, he accompanied one of the founders of modern Biblical archeology, Edward Robinson, on his second tour of the Holy Land. He remained in Sidon until 1857, when he returned to America for two years. His magnum opus, The Land and the Book, was first published in 1859, and became one of the bestselling travelogues of Palestine. In 1860 full-scale civil war broke out in Lebanon. The conflict lasted 60 days and spread to Damascus. Thomson supervised the distribution of £30,000 of money, food and clothing amongst the thousands of destitute refugees. At a Beirut Mission Meeting on 23 January 1862, he proposed the establishment of a college with Daniel Bliss as its president. The Syrian Protestant College was established in 1866 with 16 students. This college evolved into the American University of Beirut. Theophilus Waldmeier's autobiography states that it was on Thomson's advice, in 1873, that Waldmeier established Brummana High School.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-36] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on search results from 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0.
Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x versions received security updates down to Python 3.9.24, followed by a final release, Python 3.9.25, the last in the 3.9 series. Python 3.10 has been the oldest supported branch since November 2025.
Python 3.15 has had an alpha release, and an official downloadable Android executable is available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as these: However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features had been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules.
This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli is a Fellow at the Python Software Foundation and Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials.
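The "at least three ways to format a string literal" can be shown concretely; a small sketch comparing f-strings, str.format, and printf-style % formatting, all producing the same result:

```python
# Three equivalent ways to format the same string; which one a
# programmer "should" use is largely a matter of style.
name, n = "world", 3
a = f"hello {name} {n}"            # f-string (Python 3.6+)
b = "hello {} {}".format(name, n)  # str.format
c = "hello %s %d" % (name, n)      # printf-style % formatting
assert a == b == c == "hello world 3"
```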
For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. 
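A brief sketch of the two points above (the function name `classify` is illustrative): indentation alone delimits blocks, and a name may be rebound at any time to an object of a different type:

```python
def classify(n):
    if n % 2 == 0:      # the indented line below forms the if-block
        kind = "even"
    else:
        kind = "odd"
    return kind          # the dedent ended the if/else blocks

x = 10        # x refers to an int object
x = "ten"     # the same name rebound to a str object (dynamic typing)
print(classify(4), classify(7), x)  # even odd ten
```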
However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations.
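A minimal sketch of passing data back into a generator with send(), as described above (the `running_total` generator is an illustrative example, not from the original text):

```python
def running_total():
    """Generator that accumulates values sent in by the caller."""
    total = 0
    while True:
        value = yield total   # receive a value from the caller's send()
        total += value

gen = running_total()
next(gen)            # prime the generator: run up to the first yield
print(gen.send(5))   # 5
print(gen.send(7))   # 12
```

Before Python 2.5, only the unidirectional flow (values yielded out via `next`) was possible; `send()` added the inbound channel.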
These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in type annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0. Also, it offers the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represents classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round to even method: round(1.5) and round(2.5) both produce 2.
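The arithmetic behavior described above can be checked directly; each assertion below restates a claim from the text:

```python
# Division, modulo, exponentiation, and rounding in Python 3.
assert 7 / 2 == 3.5          # true division always yields a float
assert 7 // 2 == 3           # floor division
assert -7 // 2 == -4         # rounds toward negative infinity
assert 4 % -3 == -2          # remainder takes the sign of the divisor
assert 5 ** 3 == 125
assert 9 ** 0.5 == 3.0

a, b = -7, 2
assert b * (a // b) + a % b == a   # the floor-division/modulo identity

assert round(1.5) == 2 and round(2.5) == 2   # round half to even
```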
Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs: To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples "Hello, World!" program: Program to calculate the factorial of a non-negative integer: Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. 
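The listings mentioned above (a function that prints its inputs with a default parameter value, and the factorial program) are not reproduced in the text; a plausible reconstruction, with illustrative names `greet` and `factorial`:

```python
def greet(name, greeting="Hello"):
    """Print the inputs; 'greeting' has a default value used when
    no actual value is provided at the call site."""
    print(greeting, name)

greet("World")        # prints: Hello World
greet("World", "Hi")  # prints: Hi World

def factorial(n):
    """Factorial of a non-negative integer."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```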
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are web browser-based IDEs, such as the following environments: Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer).
Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; many outdated platforms have been dropped, so far fewer operating systems are supported than in the past. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which causes binary sizes to be massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported: There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python: There are also specialized compilers: Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax: A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13.
In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. These approaches include the following strategies or tools: Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language.
Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/British_Government] | [TOKENS: 2717] |
Contents Government of the United Kingdom The UK Government, formally His Majesty's Government[a], is the central executive authority of the United Kingdom of Great Britain and Northern Ireland. The government is led by the prime minister (Sir Keir Starmer since 5 July 2024) who advises the monarch on the appointment of all the other ministers. The government is currently supported by the Labour Party, which has had a majority in the House of Commons since 2024. The prime minister and his most senior ministers belong to the supreme decision-making committee, known as the Cabinet. Ministers of the Crown are responsible to the House in which they sit; they make statements in that House and take questions from members of that House. For most senior ministers this is usually the elected House of Commons rather than the House of Lords. The government is dependent on Parliament to make primary legislation, and general elections are held at least once every five years to elect a new House of Commons, unless the prime minister advises the monarch to dissolve Parliament, in which case an election may be held sooner. After an election, the monarch selects as prime minister the leader of the party most likely to command the confidence of the House of Commons, usually by possessing a majority of MPs. Under the uncodified British constitution, executive authority lies with the sovereign, although this authority is exercised only after receiving the advice of the Privy Council. In most cases, members of the Cabinet exercise power directly as leaders of the government departments, though some Cabinet positions are sinecures to a greater or lesser degree (for instance Chancellor of the Duchy of Lancaster or Lord Privy Seal).
The government is sometimes referred to by the metonym "Westminster" or "Whitehall", as many of its offices are situated there. These metonyms are used especially by the devolved executives, the Scottish Government, Welsh Government and Northern Ireland Executive, as well as English strategic authority mayoralties to differentiate themselves from His Majesty's Government (HMG). History The United Kingdom is a constitutional monarchy in which the reigning monarch (that is, the king or queen who is the head of state at any given time) does not make any open political decisions. All political decisions are taken by the government and Parliament. This constitutional state of affairs is the result of a long history of constraining and reducing the political power of the monarch, beginning with Magna Carta in 1215. Since the start of Edward VII's reign in 1901, by convention, the prime minister has been an elected member of Parliament (MP) and thus answerable to the House of Commons, although there were two weeks in 1963 when Alec Douglas-Home was first a member of the House of Lords and then of neither house. A similar convention applies to the position of chancellor of the exchequer. The last chancellor of the exchequer to be a member of the House of Lords was Lord Denman, who served for one month in 1834. Powers The British monarch is the head of state and the sovereign, but not the head of government. In practice, the monarch conventionally takes little direct part in governing the country and remains neutral in political affairs. However, the authority of the state that is vested in the sovereign, known as the Crown, remains the source of executive power exercised by the government. In addition to explicit statutory authority, the Crown also possesses a body of powers in certain matters collectively known as the royal prerogative. These powers range from the authority to issue or withdraw passports to declarations of war. 
By long-standing convention, most of these powers are delegated from the sovereign to various ministers or other officers of the Crown, who may use them without having to obtain the consent of Parliament. The prime minister also has weekly meetings with the monarch. What is said in these meetings is strictly private; however, they generally involve government and political matters which the monarch has a "right and a duty" to comment on. Such comments are non-binding, however, and the King must ultimately abide by the decisions of the government. While no formal documents set out the prerogatives, the government published a list of royal prerogative powers in October 2003 to increase transparency, as some of the powers exercised in the name of the monarch are part of the royal prerogative. However, the complete extent of the royal prerogative powers has never been fully set out, as many of them originated in ancient custom and the period of absolute monarchy, or were modified by later constitutional practice. Ministers and departments As of 2019, there are around 120 government ministers supported by 560,000 civil servants and other staff working in the 24 ministerial departments and their executive agencies. There are also an additional 20 non-ministerial departments with a range of further responsibilities. In theory, a government minister does not have to be a member of either House of Parliament. In practice, however, the convention is that ministers must be members of either the House of Commons or the House of Lords in order to be accountable to Parliament. From time to time, prime ministers appoint non-parliamentarians as ministers. In recent years such ministers have been appointed to the House of Lords. Government in Parliament The government is required by convention and for practical reasons to maintain the confidence of the House of Commons. 
It requires the support of the House of Commons for the maintenance of supply (by voting through the government's budgets) and to pass primary legislation. By convention, if a government loses the confidence of the House of Commons it must either resign or call a general election. The support of the Lords, while useful to the government in getting its legislation passed without delay, is not vital. A government is not required to resign even if it loses the confidence of the Lords and is defeated in key votes in that House. The House of Commons is thus the responsible house. The prime minister is held to account during Prime Minister's Questions (PMQs), which provides an opportunity for MPs from all parties to question the PM on any subject. There are also departmental questions when ministers answer questions relating to their specific departmental brief. Unlike PMQs, both the cabinet ministers for the department and junior ministers within the department may answer on behalf of the government, depending on the topic of the question. During debates on legislation proposed by the government, ministers—usually with departmental responsibility for the bill—will lead the debate for the government and respond to points made by MPs or Lords. Committees of both the House of Commons and House of Lords hold the government to account, scrutinise its work and examine in detail proposals for legislation. Ministers appear before committees to give evidence and answer questions. Government ministers are also required by convention and the Ministerial Code, when Parliament is sitting, to make major statements regarding government policy or issues of national importance to Parliament. This allows MPs or Lords to question the government on the statement. When the government instead chooses to make announcements first outside Parliament, it is often the subject of significant criticism from MPs and the Speaker of the House of Commons. 
Location The prime minister is based at 10 Downing Street in Westminster, London. Cabinet meetings also take place here. Most government departments have their headquarters nearby in Whitehall. Limits of government power The government's powers include general executive and statutory powers, delegated legislation, and numerous powers of appointment and patronage. However, some powerful officials and bodies (e.g. HM judges, local authorities, and the Charity Commission) are legally more or less independent of the government, and government powers are legally limited to those retained by the Crown under common law or granted and limited by act of Parliament. Both substantive and procedural limitations are enforceable in the courts by judicial review. Nevertheless, magistrates and mayors can still be arrested and put on trial for corruption, and the government has powers to insert commissioners into a local authority to oversee its work, and to issue directives that must be obeyed by the local authority if the local authority is not abiding by its statutory obligations. By contrast, in European Union (EU) member states, EU officials cannot be prosecuted for any actions carried out in pursuit of their official duties, and foreign diplomats (though not their employees) and foreign members of the European Parliament are immune from prosecution in EU states under any circumstance. As a consequence, neither EU bodies nor diplomats have to pay taxes, since it would not be possible to prosecute them for tax evasion. When the UK was a member of the EU, this caused a dispute when the US ambassador to the UK claimed that London's congestion charge was a tax, and not a charge (despite the name), and therefore he did not have to pay it—a claim the Greater London Authority disputed. Similarly, the monarch is immune from criminal prosecution and may only be sued with his permission (this is known as sovereign immunity). 
The sovereign, by law, is not required to pay income tax, but Queen Elizabeth II voluntarily paid it from 1993 until the end of her reign in 2022, and also paid local rates voluntarily. However, the monarchy also receives a substantial grant from the government, the Sovereign Grant, and Queen Elizabeth II's inheritance from her mother, Queen Elizabeth The Queen Mother, was exempt from inheritance tax. In addition to legislative powers, His Majesty's Government has substantial influence over local authorities and other bodies set up by it, through financial powers and grants. Many functions carried out by local authorities, such as paying out housing benefits and council tax benefits, are funded or substantially part-funded by the central government. Neither the central government nor local authorities are permitted to sue anyone for defamation. Individual politicians are allowed to sue people for defamation in a personal capacity and without using government funds, but this is relatively rare (although George Galloway, who was a backbench MP for a quarter of a century, has sued or threatened to sue for defamation several times). However, it is a criminal offence to make a false statement about any election candidate during an election, to reduce the number of votes they receive (as with libel, opinions do not count). Terminology While the government is the current group of ministers (the British Government frontbench), the government is also sometimes seen more broadly as including people or organisations that work for the ministers. The civil service, while 'independent of government', is sometimes described as being part of the government, due to the closeness of its working with ministers, in advising them, supporting them, and implementing their executive decisions. 
Some individuals who work for ministers even have the word 'Government' in their titles, such as the Government Actuary and the Government Chief Scientific Adviser, as do civil service organisations such as the Government Statistical Service, the Government Legal Profession, and the Government Office for Science. Companies owned by the government can also be seen as parts of the government, such as UK Government Investments and HS2 Ltd. Parliamentary Private Secretaries, by contrast, are not ministers and so are not part of the government. However, they are bound by parts of the ministerial code, are part of the payroll vote, and can be seen as being on the 'first rung of the ministerial ladder'. They are sometimes described as being part of the government. Symbols The UK Government uses a simplified form of the Royal Arms as a logo called the lesser arms. It typically omits the helm and mantling, reduces the crest to the crown alone, and has no compartment. Although the blazon of the arms has not changed since 1837, a new depiction of the Royal Arms is created for each new reign. Use of the Royal Arms by government departments and agencies is governed by the Cabinet Office. The Royal Arms feature on all Acts of Parliament, in the logos of government departments, on the cover of all UK passports (and passports issued in other British territories and dependencies), as an inescutcheon on the diplomatic flags of British Ambassadors, and on The London Gazette. It is also used in the British Overseas Territories, namely on all acts of the Anguilla House of Assembly and by the administrations of Akrotiri and Dhekelia, the Pitcairn Islands, and South Georgia and the South Sandwich Islands. Some departments use a different symbol as their logo for historic reasons, including the Scotland Office, Home Office, Ministry of Defence and Department for Business and Trade. 
Devolved governments Since 1999, certain areas of central government have been devolved to accountable governments in Scotland, Wales and Northern Ireland. These are not part of His Majesty's Government, and are directly accountable to their institutions, with their authority under the Crown; in contrast, there is no devolved national government for England, although certain powers of central government are devolved to the Greater London Authority and combined authorities. Local government Up to three layers of elected local authorities (such as county, district and parish councils) exist throughout all parts of the United Kingdom, in some places merged into unitary authorities. They have limited local tax-raising powers. Many other authorities and agencies also have statutory powers, generally subject to some central government supervision. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-258] | [TOKENS: 12858] |
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten took control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. 
A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces which can cook food and smelt ores, and torches that produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa. The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. 
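The 20-minute day-night cycle follows directly from the game's fixed tick rate. As a quick sanity check, here is a minimal sketch in Python; the tick rate (20 ticks per second) and cycle length (24,000 ticks) are community-documented values assumed here, not figures stated in the text above:

```python
# Minecraft advances game state in fixed-rate "ticks".
# Assumed (community-documented) constants, not taken from this article:
TICKS_PER_SECOND = 20      # fixed game tick rate
TICKS_PER_DAY = 24_000     # ticks in one full day-night cycle

def cycle_length_minutes(ticks_per_day: int = TICKS_PER_DAY,
                         ticks_per_second: int = TICKS_PER_SECOND) -> float:
    """Real-time length of one in-game day, in minutes."""
    return ticks_per_day / ticks_per_second / 60

print(cycle_length_minutes())  # → 20.0
```

The same conversion applies to any tick-denominated duration, such as the five-minute despawn timer for dropped items mentioned below (6,000 ticks).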
The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. 
The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. 
Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past; it is the game's only narrative text, and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. 
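To give a sense of how the experience economy scales, the cumulative XP needed to reach a given level can be sketched with the piecewise-quadratic cost curve commonly documented by the community wiki for the modern Java Edition. The exact coefficients below are an assumption of this sketch, not figures stated in the text above:

```python
def total_xp_for_level(level: int) -> int:
    """Cumulative experience points required to reach `level` from zero.

    Piecewise formula as commonly documented by the community wiki;
    the exact coefficients are an assumption of this sketch.
    """
    if level <= 16:
        return level * level + 6 * level
    if level <= 31:
        return int(2.5 * level * level - 40.5 * level + 360)
    return int(4.5 * level * level - 162.5 * level + 2220)

# Reaching level 30 (a common enchanting threshold) costs far more
# than reaching level 15:
print(total_xp_for_level(15), total_xp_for_level(30))  # → 315 1395
```

The super-linear curve is why high-level enchanting is a significant time investment relative to early-game progression.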
The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience their maps as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually take no damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, hosting one themselves, or connecting directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. 
The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Bedrock Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms through Realms was announced, starting in June 2016, with Xbox One and Nintendo Switch support, as well as support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. 
The modding community is responsible for a substantial supply of mods, ranging from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. 
Another, based on Fallout, was released on consoles that December, and for Windows and mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement saying that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. 
One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—a second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the past three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. 
Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who have purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020. Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. 
On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned for release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh.
On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011. A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition.
On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. The Better Together Update also brought Bedrock Edition to Windows 10, VR, and mobile platforms, enabling cross-play between these versions.
Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. Bedrock Edition received a native PlayStation 5 version on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows. On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. A separate version of Bedrock Edition, originally titled the Windows 10 Edition, is exclusive to Microsoft's Windows 10 and Windows 11 operating systems.
The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints.
Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month. In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Of learning the process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously.
The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld found the sound engine "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until the 2020 "Nether Update" introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini-games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception

Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable".
Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. 
The PlayStation 4 edition was the best received port to date, being praised for having 36 times larger worlds than the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time.
As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country.
Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. 
In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award in the PC and Console category. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items.
The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature in Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022.
Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, only the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first Mob Vote, this was changed so that losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters.
Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired.

Cultural impact

In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales before its full release to help fund development. Besides bolstering indie game development in the early 2010s, Minecraft also helped popularize the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. By 2018, it was still YouTube's biggest game globally.
Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch by foot on an older version of the game. YouTube announced on 14 December 2021 that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation.
Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. 
The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. 
MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. 
A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the event, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. 
Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.
========================================
[SOURCE: https://en.wikipedia.org/wiki/The_X-Files]
The X-Files The X-Files is an American science fiction drama television series created by Chris Carter. The original series aired from September 10, 1993, to May 19, 2002, on Fox, spanning nine seasons, with 202 episodes. A tenth season of six episodes ran from January to February 2016. Following the ratings success of this revival, The X-Files returned for an eleventh season of ten episodes, which ran from January to March 2018. In addition to the television series, two feature films have been released: the 1998 film The X-Files and the stand-alone film The X-Files: I Want to Believe, released in 2008, six years after the original television run ended. The series revolves around Federal Bureau of Investigation (FBI) Special Agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson), who investigate the eponymous "X-Files": marginalized, unsolved cases involving paranormal phenomena. Mulder is a skilled criminal profiler, an ardent supernaturalist, and a conspiracy theorist who believes in the existence of the paranormal, whereas Scully is a medical doctor and skeptic who has been assigned to scientifically analyze Mulder's case files. Early in the series, both agents apparently become pawns in a much larger conflict and come to trust only each other and select others. The agents discover what appears to be a governmental agenda to hide evidence of extraterrestrial life. Mulder and Scully's shared adventures initially lead them to develop a close platonic bond, which develops into a complex romantic relationship. Roughly one third of the series' episodes follow a complicated mythopoeia-driven story arc about a planned alien invasion, whereas the other two-thirds may be described as "monster of the week" episodes that focus on a single villain, mutant, or monster. 
The X-Files was inspired by earlier television series featuring elements of suspense, horror, and speculative science fiction, including The Twilight Zone, Night Gallery, Tales from the Darkside, Twin Peaks, and especially Kolchak: The Night Stalker. When creating the main characters, Carter sought to reverse gender stereotypes by making Mulder a believer and Scully a skeptic. The first seven seasons featured Duchovny and Anderson relatively equally. In the eighth and ninth seasons, Anderson took precedence while Duchovny appeared intermittently. New main characters were introduced: FBI Special Agents John Doggett (Robert Patrick) and Monica Reyes (Annabeth Gish), among others. Mulder and Scully's immediate superior, Assistant Director Walter Skinner (Mitch Pileggi), began to appear regularly. The first five seasons of The X-Files were filmed in Vancouver, British Columbia, before production eventually moved to Los Angeles, apparently to accommodate Duchovny's schedule. However, the series later returned to Vancouver with the filming of The X-Files: I Want to Believe as well as the tenth and eleventh seasons. The X-Files was a hit for the Fox network and received largely positive reviews, although its long-term story arc was criticized near the conclusion. Initially considered a cult series, it turned into a pop culture touchstone that tapped into public mistrust of governments and large institutions and embraced conspiracy theories and spirituality. Both the series and lead actors Duchovny and Anderson received multiple awards and nominations, and by its conclusion the show was the longest-running science fiction series in American television history. The series also spawned a franchise that includes spin-offs Millennium and The Lone Gunmen, two theatrical films, and accompanying merchandise. Premise The X-Files follows Federal Bureau of Investigation (FBI) Special Agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson). 
Special Agent Mulder is a talented profiler and conspiracy theorist, and an ardent supernaturalist. He is also adamant about the existence of intelligent extraterrestrial life and its presence on Earth. These beliefs earn him the nickname "Spooky Mulder" and an assignment to a little-known department that deals with unsolved cases, the X-Files. His belief in the paranormal springs from the claimed alien abduction of his sister Samantha Mulder when Mulder was 12. Her abduction drives Mulder throughout most of the series. Because of this, as well as more nebulous desires for vindication and the revelation of truths kept hidden by human authorities, Mulder struggles to maintain objectivity in his investigations. Special Agent Scully is a foil for Mulder in this regard. As a medical doctor and natural skeptic, Scully approaches cases with detachment, even when Mulder, despite his considerable training, loses his objectivity. She is partnered with Mulder initially so that she can debunk Mulder's nonconforming theories, often supplying logical, scientific explanations for the cases' apparently unexplainable phenomena. Although she is frequently able to offer scientific alternatives to Mulder's deductions, she is rarely able to refute them completely. Over the course of the series, she becomes increasingly dissatisfied with her own ability to approach the cases scientifically. After Mulder's abduction at the hands of aliens in the seventh season finale "Requiem", Scully becomes a "reluctant believer" who explains the paranormal with science. Various episodes also deal with the relationship between Mulder and Scully, originally platonic, but that later develops romantically. Mulder and Scully are joined by John Doggett (Robert Patrick) and Monica Reyes (Annabeth Gish) late in the series, after Mulder is abducted. Doggett replaces him as Scully's partner and helps her search for him, later involving Reyes, of whom Doggett had professional knowledge. 
The initial run of The X-Files ends when Mulder is secretly subjected to a military tribunal for breaking into the top-secret Mount Weather Emergency Operations Center and viewing plans for alien invasion and colonization of Earth. He is found guilty and sentenced to death but escapes punishment with the help of the other agents, and he and Scully become fugitives. Key episodes, known as the "mytharc", were recognized as the "mythology" of the series canon; these episodes carried the extraterrestrial/conspiracy storyline that evolved throughout the series. "Monster of the week"—often abbreviated as "MotW" or "MoW"—came to denote the remainder of The X-Files episodes. These episodes, forming the majority of the series, dealt with paranormal (and in certain cases, merely criminal) phenomena, including: serial killers (with or without supernatural powers), cryptids, ghosts, mutants, science fiction technology, horror monsters and religious phenomena. Some of the "monster of the week" episodes featured satiric elements and comedic story lines. The main story arc involves the agents' efforts to uncover a government conspiracy that covers up the existence of extraterrestrials and their sinister collaboration with said government. Mysterious men constituting a shadow element within the U.S. government, known as the Syndicate, are the major villains in the series; late in the series it is revealed that The Syndicate acts as the only liaison between mankind and a group of extraterrestrials that intends to destroy humanity. They are usually represented by the Cigarette Smoking Man (William B. Davis), a ruthless killer, masterful politician, negotiator, failed novelist, and the series' principal antagonist. As the series goes along, Mulder and Scully learn about evidence of the alien invasion piece by piece. 
It is revealed that the extraterrestrials plan on using a sentient virus, known as the black oil (also known as "Purity"), to infect mankind and turn the population of the world into a slave race. The Syndicate—having made a deal to be spared by the aliens—have been working to develop an alien-human hybrid that will be able to withstand the effects of the black oil. The group has also been secretly working on a vaccine to overcome the black oil; this vaccine is revealed in the latter parts of season five, as well as the 1998 film. Another faction of aliens, the faceless rebels, works to counter the colonization effort. Eventually, in the season six episodes "Two Fathers" and "One Son", the rebels manage to destroy the Syndicate. The colonists, now without human liaisons, dispatch the "Super Soldiers": beings that resemble humans, but are biologically alien. In the latter parts of season eight, and the whole of season nine, the Super Soldiers manage to replace key individuals in the government, forcing Mulder and Scully to go into hiding. Cast and characters Production Series creator Chris Carter described the leads' origin: "Mulder and Scully came right out of my head. A dichotomy. They are the equal parts of my desire to believe in something and my inability to believe in something. My skepticism and my faith. And the writing of the characters came very easily to me. I want, like a lot of people do, to have the experience of witnessing a paranormal phenomenon. At the same time I want not to accept it, but to question it. I think those characters and those voices came out of that duality." Carter, a California native, was given the opportunity to produce new shows for the Fox network in the early 1990s. Carter was tired of the comedies he had been working on for Walt Disney Pictures. 
A report that said 3.7 million Americans believed they may have been abducted by aliens, the Watergate scandal, and the 1970s horror series Kolchak: The Night Stalker all helped trigger the idea for The X-Files. He wrote the pilot episode in 1992. Carter's initial pitch for The X-Files was rejected by Fox executives. He fleshed out the concept and returned a few weeks later, whereupon they commissioned the pilot. Carter worked with NYPD Blue producer Daniel Sackheim to further develop the pilot, drawing stylistic inspiration from the 1988 documentary The Thin Blue Line and the British television series Prime Suspect. Inspiration also came from Carter's memories of The Twilight Zone as well as from The Silence of the Lambs, which provided the impetus for framing the series around agents from the FBI, to provide the characters with a more plausible reason for being involved in each case than Carter believed was present in Kolchak. Carter was determined to keep the relationship between the two leads strictly platonic, basing their interactions on the characters of Emma Peel and John Steed in The Avengers series. The early 1990s series Twin Peaks was a major influence on the show's dark atmosphere and its often surreal blend of drama and irony. Duchovny had appeared as a transgender DEA agent in Twin Peaks and the Mulder character was seen as a parallel to that show's FBI Agent Dale Cooper. The producers and writers cited All the President's Men, Three Days of the Condor, Close Encounters of the Third Kind, Raiders of the Lost Ark, Rashomon, The Thing, The Boys from Brazil, The Silence of the Lambs and JFK as other influences. Episodes written by Darin Morgan often referred to or referenced other films. Duchovny had worked in Los Angeles for three years prior to The X-Files, focusing on feature films. In 1993, his manager Melanie Green gave him the script for the pilot episode of The X-Files. 
Green and Duchovny were both convinced it was a good script so he auditioned for the lead. Duchovny's audition was "terrific", though he talked rather slowly. While the casting director of the show was very positive toward him, Carter thought that he was not particularly intelligent. He asked Duchovny if he could "please" talk faster in future episodes. Duchovny, however, turned out to be one of the best-read people that Carter knew. Anderson auditioned for the part of Scully in 1993. "I couldn't put the script down", she recalled. For the role, the network wanted either a more established actress or one who was "taller, leggier, blonder and breastier" than the 24-year-old Anderson, a theater veteran with minor film experience. After auditions, Carter felt she was the only choice. Carter insisted that Anderson had the kind of "no-nonsense integrity that the role required." For portraying Scully, Anderson won numerous major awards: the Screen Actors Guild Award in 1996 and 1997, an Emmy Award in 1997, and a Golden Globe Award in 1997. The character Walter Skinner was played by actor Mitch Pileggi, who had unsuccessfully auditioned for the roles of two or three other characters on The X-Files before getting the part. At first, being asked back to audition for the recurring role puzzled him, until he discovered the reason he had not previously been cast in those roles—Carter had been unable to envision Pileggi as any of those characters, because the actor had been shaving his head. When Pileggi auditioned for Walter Skinner, he had been in a grumpy mood and had allowed his hair to grow. His attitude fit well with Skinner's character, causing Carter to assume that the actor was only pretending to be grumpy. Pileggi later realized he had been lucky that he had not been cast in one of the earlier roles, as he believed he would have appeared in only a single episode and would have missed the opportunity to play the recurring role. 
Before the seventh season aired, Duchovny filed a lawsuit against 20th Century Fox, claiming that Fox had undersold the rights to its own affiliates, thereby costing him huge sums of money. Eventually, the lawsuit was settled, and Duchovny was awarded a settlement of about $20 million, but the lawsuit put strain on Duchovny's professional relationships. Neither Carter nor Duchovny was contracted to work on the series beyond the seventh season; however, Fox entered into negotiations near the end of that season to bring the two on board for an eighth season. After settling his contract dispute, Duchovny quit full-time participation in the show after the seventh season. This contributed to uncertainties over the likelihood of an eighth season. Carter and most fans felt the show was at its natural endpoint with Duchovny's departure, but it was decided that Mulder would be abducted at the end of the seventh season and would return in 12 episodes the following year. The producers then announced that a new character, John Doggett, would fill Mulder's role. More than 100 actors auditioned for the role of Doggett, but only about ten were seriously considered. Lou Diamond Phillips, Hart Bochner, and Bruce Campbell were among the ten. The producers chose Robert Patrick. Carter believed that the series could continue for another ten years with new leads, and the opening credits were accordingly redesigned in both seasons eight and nine to emphasize the new actors (along with Pileggi, who was finally listed as a main character). Doggett's presence did not give the series the ratings boost the network executives were hoping for. The eighth-season episode "This is Not Happening" marked the first appearance of Monica Reyes, played by Gish, who became a main character in season nine. Her character was developed and introduced due to Anderson's possible departure at the end of the eighth season. 
Although Anderson ultimately stayed through the ninth season, Gish became a series regular. Glen Morgan and James Wong's early influence on The X-Files mythology led to their introduction of popular secondary characters who continued for years in episodes written by others: Scully's father, William (Don S. Davis); her mother, Margaret (Sheila Larken); and her sister, Melissa (Melinda McGraw). The conspiracy-inspired trio The Lone Gunmen were also secondary characters. The trio was introduced in the first-season episode "E.B.E." as a way to make Mulder appear more credible. They were originally meant to appear in only that episode, but due to their popularity, they returned in the second-season episode "Blood" and became recurring characters. Cigarette Smoking Man, portrayed by William B. Davis, was initially cast as an extra in the pilot episode. His character, however, grew into the main antagonist. During the early stages of production, Carter founded Ten Thirteen Productions and began to plan for filming the pilot in Los Angeles. However, unable to find suitable locations for many scenes, he decided to "go where the good forests are" and moved production to Vancouver. It was soon realized by the production crew that since so much of the first season would require filming on location, rather than on sound stages, a second location manager would be needed. The show remained in Vancouver for the first five seasons; production then shifted to Los Angeles beginning with the sixth season. Duchovny was unhappy over his geographical separation from his wife, Téa Leoni, although his discontent was popularly attributed to frustration with Vancouver's persistent rain. Anderson also wanted to return to the United States, and Carter relented following the fifth season. The season ended in May 1998 with "The End", the final episode shot in Vancouver and the final episode with the involvement of many of the original crew members, including director and producer R.W. 
Goodwin and his wife Sheila Larken, who played Margaret Scully and would later return briefly. With the move to Los Angeles, many changes behind the scenes occurred, as much of the original The X-Files crew was gone. New production designer Corey Kaplan, editor Lynne Willingham, writer David Amann and director and producer Michael Watkins joined and stayed for several years. Bill Roe became the show's new director of photography and episodes generally had a drier, brighter look due to California's sunshine and climate, as compared with Vancouver's rain, fog and temperate forests. Early in the sixth season, the producers took advantage of the new location, setting the show in new parts of the country. For example, Vince Gilligan's "Drive", about a man subject to an unexplained illness, was a frenetic action episode, unusual for The X-Files largely because it was set in Nevada's stark desert roads. The "Dreamland" two-part episode was also set in Nevada, this time in Area 51. The episode was largely filmed at "Club Ed", a movie ranch located on the outskirts of Lancaster, California. Although the sixth through ninth seasons were filmed in Los Angeles, the series' second movie, The X-Files: I Want to Believe (2008), was filmed in Vancouver. According to Spotnitz, the film's script was written for the city and surrounding areas. The 2016 revival was also shot there. The music was composed by Mark Snow, who got involved with The X-Files through his friendship with executive producer Goodwin. Initially, Carter had no candidates. A little over a dozen people were considered, but Goodwin continued to press for Snow, who auditioned around three times with no sign from the production staff as to whether they wanted him. One day, however, Snow's agent called him, talking about the "pilot episode" and hinting that he had got the job. The theme, "The X-Files", used more instrumental sections than most dramas. 
The theme song's famous whistle effect was inspired by the track "How Soon Is Now?" from the US edition of The Smiths' 1985 album Meat Is Murder. After attempting to craft the theme with different sound effects, Snow used a Proteus 2 rackmount sound module with a preset sound called "Whistl'n Joe". After hearing this sound, Carter was "taken aback" and noted it was "going to be good". According to the "Behind the Truth" segment on the first season DVD, Snow created the echo effect on the track by accident. He felt that after several revisions, something still was not right. Carter walked out of the room and Snow put his hand and forearm on his keyboard in frustration. By doing so, he accidentally activated an echo effect setting. The resulting riff pleased Carter; Snow said, "this sound was in the keyboard. And that was it." The second episode, "Deep Throat", marked Snow's debut as solo composer for an entire episode. The production crew was determined to limit the music in the early episodes. Likewise, the theme song itself first appeared in "Deep Throat". Snow was tasked with composing the score for both The X-Files films. The films marked the first appearance of real orchestral instruments; previous music had been crafted by Snow using digitally sampled instrument sounds. Snow's soundtrack for the first film, The X-Files: Original Motion Picture Score, was released in 1998. For the second film, Snow recorded with the Hollywood Studio Symphony in May 2008 at the Newman Scoring Stage at 20th Century Fox in Century City. UNKLE recorded a new version of the theme music for the end credits. Some of the unusual sounds were created by a variation of silly putty and dimes tucked into piano strings. Snow commented that the fast percussion featured in some tracks was inspired by the track "Prospectors Quartet" from the There Will Be Blood soundtrack. The soundtrack score, The X-Files: I Want to Believe, was released in 2008. 
The opening sequence was made in 1993 for the first season, and remained unchanged until Duchovny left the show. Carter sought to make the title an "impactful opening" with "supernatural images". These scenes notably include a split-screen image of a seed germinating and a "terror-filled, warped face". The latter was created when Carter found a video operator who was able to create the effect. The sequence was extremely popular and won the show its first Emmy Award, which was for Outstanding Graphic Design and Title Sequences. Producer Paul Rabwin was particularly pleased with the sequence, and felt that it was something that had "never [been] seen on television before". In 2017, James Charisma of Paste ranked the show's opening sequence #8 on a list of The 75 Best TV Title Sequences of All Time. The premiere episode of season eight, "Within", revealed the first major change to the opening credits. Along with Patrick, the sequence used new images and updated photos for Duchovny and Anderson, although Duchovny only appears in the opening credits when he appears in an episode. Carter and the production staff saw Duchovny's departure as a chance to change things. The replacement shows various pictures of Scully's pregnancy. According to executive producer Frank Spotnitz, the sequence also features an "abstract" way of showing Mulder's absence in the eighth season: he falls into an eye. Season nine featured an entirely new sequence. Since Anderson wanted to move on, the sequence featured Reyes and Skinner. Duchovny's return to the show for the ninth-season finale, "The Truth" marked the largest number of cast members to be featured in the opening credits, with five. The revival seasons use the series' original opening credits sequence. The sequence ends with the tagline "The Truth Is Out There", which is used for the majority of the episodes. 
For certain episodes, the tagline was changed to be more thematically relevant. Broadcast and release The pilot premiered on September 10, 1993, and reached 12 million viewers. As the season progressed, ratings began to increase and the season finale garnered 14 million viewers. The first season ranked 105th out of 128 shows during the 1993–94 television season. The series' second season increased in ratings—a trend that would continue for the next three seasons—and finished 63rd out of 141 shows. These ratings were not spectacular, but the series had attracted enough fans to receive the label "cult hit", particularly by Fox standards. Most importantly, it made great gains among the 18-to-49 age demographic sought by advertisers. During its third year, the series ranked 55th and was viewed by an average of 15.40 million viewers, an increase of almost seven percent over the second season, making it Fox's top-rated program in the 18–49-year-old demographic. Although the first three episodes of the fourth season aired on Friday night, the fourth episode "Unruhe" aired on Sunday night. The show remained on Sunday until its end. The season hit a high with its twelfth episode, "Leonard Betts", which was chosen as the lead-out program following Super Bowl XXXI. The episode was viewed by 29.1 million viewers, the series' highest-rated episode. The fifth season debuted with "Redux I" on November 2, 1997, and was viewed by 27.34 million people, making it the highest-rated non-special broadcast episode of the series. The season ranked as the eleventh-most watched series during the 1997–98 year, with an average of 19.8 million viewers. It was the series' highest-rated season as well as Fox's highest-rated program during the 1997–98 season. The sixth season premiered with "The Beginning", watched by 20.24 million viewers. 
The show ended season six with lower numbers than the previous season, beginning a decline that would continue for the show's final three years. The X-Files was nevertheless Fox's highest-rated show that year. The seventh season, originally intended as the show's last, ranked as the 29th most-watched show for the 1999–2000 year, with 14.20 million viewers. This made it, at the time, the lowest-rated year of the show since the third season. The first episode of season eight, "Within", was viewed by 15.87 million viewers, an 11% decrease from the seventh season opener, "The Sixth Extinction". The first part of the ninth season opener, "Nothing Important Happened Today", attracted only 10.6 million viewers, the series' lowest-rated season premiere. The original series finale, "The Truth", attracted 13.25 million viewers, the series' lowest-rated season finale. The ninth season was the 63rd most-watched show for the 2001–02 season, tying its season two rank. On May 19, 2002, the finale aired, and the Fox network confirmed that The X-Files was over. Talking about the beginning of the ninth season, Carter said, "We lost our audience on the first episode. It's like the audience had gone away and I didn't know how to find them. I didn't want to work to get them back because I believed what we are doing deserved to have them back." While news outlets attributed the declining ratings to lackluster stories and poor writing, The X-Files production crew blamed the September 11 terrorist attacks as the main factor. At the end of 2002, The X-Files had become the longest continuously running science fiction series on American broadcast television, a record later surpassed by Stargate SG-1 in 2007 and Smallville in 2011. The debut episode of the 2016 revival, "My Struggle", first aired on January 24, 2016, and was watched by 16.19 million viewers. 
In terms of viewers, this made it the highest-rated episode of The X-Files since the eighth-season episode "This Is Not Happening" in 2001, which was watched by 16.9 million viewers. When DVR and streaming are taken into account, "My Struggle" was seen by 21.4 million viewers, scoring a 7.1 Nielsen rating. The season ended with "My Struggle II", which was viewed by 7.60 million viewers. In total, the season was viewed by an average of 13.6 million viewers; it ranked as the seventh most-watched television series of the 2015–16 year, making it the highest-ranked season of The X-Files to ever air. A few years later, the premiere episode of the eleventh season, "My Struggle III", was watched by 5.15 million viewers. This was a decrease from the previous season's debut and the lowest-rated premiere for any season of the show. The season concluded with "My Struggle IV", which was seen by 3.43 million viewers, again a decrease from the previous season. "My Struggle IV", which became the de facto finale for the series, was also the show's lowest-rated finale. In total, the season was viewed by an average of 5.34 million viewers, and it ranked as the 91st most-watched television series of the 2018–19 year. According to the streaming aggregator JustWatch, The X-Files was the ninth most streamed television series across all platforms in the United States during the week ending November 7, 2021.
Films
After several successful seasons, Carter wanted to tell the story of the series on a wider scale, an ambition that ultimately turned into a feature film. He later explained that the main challenge was creating a story that would not require the viewer to be familiar with the broadcast series. The movie was filmed in the hiatus between the show's fourth and fifth seasons, and re-shoots were conducted during the filming of the fifth season. Due to the demands on the actors' schedules, some episodes of the fifth season focused on just one of the two leads. 
On June 19, 1998, the eponymous film The X-Files, also known as The X-Files: Fight the Future, was released. The crew intended the movie to be a continuation of the season five finale "The End", but it was also meant to stand on its own. The season six premiere, "The Beginning", began where the film ended. The film was written by Carter and Spotnitz and directed by series regular Rob Bowman. In addition to Mulder, Scully, Skinner and the Cigarette Smoking Man, it featured guest appearances by Martin Landau, Armin Mueller-Stahl and Blythe Danner, who appeared only in the film. It also featured the last appearance of John Neville as the Well-Manicured Man. Jeffrey Spender, Diana Fowley, Alex Krycek and Gibson Praise—characters who had been introduced in the fifth-season finale or were otherwise integral to the television series—do not appear in the film. Although the film had a strong domestic opening and received mostly positive reviews from critics, attendance dropped sharply after the first weekend. It failed to make a profit during its theatrical release—due in part to its large promotional budget—but was more successful internationally; the worldwide theatrical box office total eventually reached $189 million. The film's production cost and ad budgets were each close to $66 million. Unlike in the series, Anderson and Duchovny received equal pay for the film. In November 2001, Carter decided to pursue a second film adaptation. Production was slated to begin after the ninth season, with a projected release in December 2003. In April 2002, Carter reiterated his and the studio's desire to make a sequel film. He planned to write the script over the summer and begin production in spring or summer 2003 for a 2004 release. Carter described the film as independent of the series, saying, "We're looking at the movies as stand-alones. They're not necessarily going to have to deal with the mythology." 
Bowman, who had directed various episodes of The X-Files as well as the 1998 film, expressed an interest in directing the sequel, but Carter took the job himself. Spotnitz co-authored the script with Carter. The X-Files: I Want to Believe became the second film based on the series, after 1998's The X-Files: Fight the Future. Filming began in December 2007 in Vancouver and finished on March 11, 2008. The film was released in the United States on July 25, 2008, grossing $4 million on its opening day. It opened fourth on the U.S. weekend box office chart, with a gross of $10.2 million. By the end of its theatrical run, it had grossed $20,982,478 domestically and an additional $47,373,805 internationally, for a total worldwide gross of $68,369,434. Among 2008 domestic releases, it finished in 114th place. The film's stars both claimed that the timing of the movie's release, a week after the highly popular Batman film The Dark Knight, negatively affected its success. The film received mixed to negative reviews. Metacritic, which assigns a normalized rating out of 100 based on reviews from mainstream film critics, reported "mixed or average" reviews, with an average score of 47 based on 33 reviews. Rotten Tomatoes reported that 32% of 160 listed film critics gave the film a positive review, with an average rating of 4.9 out of 10. The website's critics' consensus states, "The chemistry between leads David Duchovny and Gillian Anderson do live [sic] up to The X-Files' televised legacy, but the roving plot and droning routines make it hard to identify just what we're meant to believe in." In several interviews around the release, Carter said that if I Want to Believe proved successful at the box office, a third installment would be made returning to the TV series' mythology, focusing specifically on the alien invasion and colonization of Earth foretold in the ninth-season finale, due to occur on December 22, 2012. 
In an October 2009 interview, David Duchovny likewise said he wanted to do a 2012 X-Files movie, but did not know if he would get the chance. Anderson stated in August 2012 that a third X-Files film was "looking pretty good". As of July 2013, Fox had not approved the movie, although Carter, Spotnitz, Duchovny and Anderson expressed interest. At the New York Comic Con held October 10–13, 2013, Duchovny and Anderson reaffirmed that they and Carter were interested in making a third film, with Anderson saying, "If it takes fan encouragement to get Fox interested in that, then I guess that's what it would be." On January 17, 2015, Fox confirmed that it was looking at the possibility of bringing The X-Files back, not as a movie, but as a limited-run television season. Fox chairman Dana Walden told reporters that "conversations so far have only been logistical and are in very early stages", and that the series would go forward only if Carter, Anderson, and Duchovny were all on board; it was a matter of ensuring that all of their timetables were open. On March 24, 2015, it was confirmed that the series would return with series creator Chris Carter and lead actors David Duchovny and Gillian Anderson. It premiered on January 24, 2016. A year later, on April 20, 2017, Fox officially announced that The X-Files would be returning for an eleventh season of ten episodes, which premiered on January 3, 2018. In January 2018, Gillian Anderson confirmed that season 11 would be her final season of The X-Files. The following month, Carter stated in an interview that he could see the show continuing without Anderson. In May 2018, Fox's co-CEO Gary Newman commented that "there are no plans to do another season at the moment." In October 2020, Chris Carter said: "I always thought there would be even more X-Files." He admitted that continuing the series at that point with Duchovny and Anderson was unlikely, but he had plans to continue the franchise with an upcoming animated spinoff. 
"Being that Gillian has decided to move on with her career, we certainly couldn't do Mulder and Scully again. But that's not to say there isn't another way to do The X-Files. And so right now I think the future is unwritten." The rights to the concept have been owned by Disney since its acquisition of Fox was completed in 2019. In August 2020, Fox announced that an animated comedy-oriented reboot series was in development, under the working title The X-Files: Albuquerque. In March 2023, it was confirmed the series would not be moving forward. That same month, it was reported that Ryan Coogler was developing a new reboot of the series, according to series creator Chris Carter. In February 2024, Carter confirmed he is not involved with its production. In April 2025, Coogler said the X-Files reboot would be his next project after the film Sinners, and he began working on the project by October of that year; Coogler said that he chose to work on the project due to his mother's love for the original series. In December 2025, Coogler noted the new series would follow the original, stating "We intend on having both monsters of the week and also the overarching conspiracy".
Home media
On September 24, 1996, the first "wave" set of The X-Files VHS tapes was released. Wave sets were released covering the first through fourth seasons. Each "wave" comprised three VHS tapes, each containing two episodes, for a total of six episodes per wave and two waves per season. For example, the home video release of wave one drew from the first half of the first season: "Pilot"/"Deep Throat", "Conduit"/"Ice" and "Fallen Angel"/"Eve". Each wave was also available in a boxed set. Unlike later DVD season releases, the tapes did not include every episode from the seasons. Ultimately, twelve episodes per season—approximately half the total number aired—were selected by Carter, including nearly all "mythology arc" episodes and selected standalone episodes. 
Carter briefly introduced each episode with an explanation of why it was chosen and anecdotes from the set. These clips were later included on the full-season DVDs. Wave eight, covering the last part of the fourth season, was the last to be released. No Carter interviews appeared on DVDs for later seasons. Many of the waves had collectible cards for each episode. All nine seasons were released on DVD, along with the two films. Seasons 1 to 4 were in fullscreen, and seasons 5 onward were in widescreen with the top and bottom of the opening credits cropped off; it is unclear how faithful this is to the original broadcasts. The entire series was re-released on DVD in early 2006, in a "slimmer" package. The first five slim-case versions did not come with some bonus materials that were featured in the original fold-out versions. However, seasons six, seven, eight and nine all contained the bonus materials found in the original versions. Episodic DVDs have also been released in Region 2, such as "Deadalive", "Existence", "Nothing Important Happened Today", "Providence" and "The Truth". Various other episodes were released on DVD and VHS. In 2005, four DVD sets were released containing the main story arc episodes of The X-Files: Volume 1 – Abduction, Volume 2 – Black Oil, Volume 3 – Colonization and Volume 4 – Super Soldiers. A boxed set containing all nine seasons and the first film was made available in 2007; it contains all of the special features from the initial releases. The set also includes an additional disc of new bonus features and various collectibles, including a poster for the first film, a comic book, a set of collector cards and a guide to all 202 episodes across all nine seasons and the first film. Because the set was released in 2007, the second film, released in 2008, is not included. Release of The X-Files' seasons on Blu-ray, restored in high definition, was rumored to begin in late 2013. 
The German TV channel ProSieben Maxx began airing first-season episodes reformatted in widescreen and in high definition on January 20, 2014. On April 23, 2015, Netflix began streaming episodes of The X-Files in high definition, marking the first time the series had been made available in that format in North America. In October 2015, it was confirmed that the complete series would be reissued on Blu-ray, and the full set was released on December 8, 2015. The set was criticized for using the wrong fonts in the title sequence, and season 8 was affected by color-balance issues that made the picture appear darker in most episodes (an issue known as "black crush"). These issues led to Fox offering corrected discs and eventually issuing new sets with the correct color balance.
Spin-offs
The Lone Gunmen, an American science fiction television series created by Carter and broadcast on Fox, was crafted as a more humorous spin-off of The X-Files. The series starred the eponymous Lone Gunmen and was first broadcast in March 2001, during The X-Files's month-long hiatus. Although the debut episode garnered 13.23 million viewers, its ratings began to steadily drop, and the program was cancelled after thirteen episodes. The last episode was broadcast in June 2001 and ended on a cliffhanger, which was partially resolved in a ninth-season episode of The X-Files titled "Jump the Shark" that is included in the DVD release of the series. The X-Files was adapted into a comic book series published by Topps Comics during the show's third and fourth seasons. The initial comic books were written solely by Stefan Petrucha. According to Petrucha, there were three types of stories: "those that dealt with the characters, those that dealt with the conspiracy, and the monster-of-the-week sort of stuff". Petrucha cited the latter as the easiest to write. Petrucha saw Scully as a "scientist [...] 
with real world faith", adding that the difference between Mulder and Scully "is not that Mulder believes and Scully doesn't; it's more a difference in procedure." Accordingly, Mulder's viewpoint was often written to be just as valid as Scully's, and Scully's science was often portrayed as just as convincing as Mulder's more outlandish ideas. Petrucha was eventually fired, and various other authors took up the job. Topps published 41 regular issues of The X-Files from 1995 to 1998. A crossover graphic novel between The X-Files and 30 Days of Night was published by WildStorm in 2010. It follows Mulder and Scully to Alaska as they investigate a series of murders that may be linked to vampires. In 2013, it was announced that The X-Files would return to comic book form with Season 10, now published by IDW. The series, which follows Mulder and Scully after the events of The X-Files: I Want to Believe, was released in June 2013. Joe Harris wrote the series, and Michael Walsh and Jordie Bellaire provided the artwork. It was later announced that Carter himself would be the executive producer for the series and would be "providing feedback to the creative team regarding scripts and outlines to keep the new stories in line with existing and on-going canon." The comic restarted the show's mythology, and the first arc of the story focused on "seek[ing] to bring the mythology of the Alien Conspiracy back up to date in a more paranoid, post-terror, post-WikiLeaks society." In addition, sequels to popular "monster of the week" episodes were made. The X-Files Season 10 concluded on July 1, 2015, after 25 issues. In August 2015, The X-Files Season 11 comic book began, also published by IDW. The eight-issue series served as a continuation of the TV show. Chris Carter was the executive producer of the comic book series, while the issues were written by Joe Harris and illustrated by Matthew Dow Smith and Jordie Bellaire. 
Reception
The X-Files received positive reviews from television critics, with many calling it one of the best series to air on American television in the 1990s. Ian Burrell of the British newspaper The Independent called the show "one of the greatest cult shows in modern television". Richard Corliss of Time magazine called the show the "cultural touchstone of" the 1990s. Hal Boedeker of the Orlando Sentinel said in 1996 that the series had grown from a cult favorite to a television "classic". The Evening Herald said the show had "overwhelming influence" on television, ahead of such shows as The Simpsons. In 2012, Entertainment Weekly listed the show at #4 in the "25 Best Cult TV Shows from the Past 25 Years", describing it as "a paean to oddballs, sci-fi fans, conspiracy theorists and Area 51 pilgrims everywhere. Ratings improved every year for the first five seasons, while Mulder and Scully's believer-versus-skeptic dynamic created a TV template that's still in heavy use today." In 2004 and 2007, The X-Files ranked #2 on TV Guide's "Top Cult Shows Ever". In 2002, the show ranked as the 37th-best television show of all time. In 1997, the episodes "Clyde Bruckman's Final Repose" and "Small Potatoes" ranked #10 and #72, respectively, on "TV Guide's 100 Greatest Episodes of All Time". In 2013, TV Guide included it in its list of the "60 Greatest Dramas of All Time" and ranked it as the #4 science fiction show and the #25 best series of all time. In 2007, Time included it on a list of the "100 Best TV Shows of All Time". In 2008, Entertainment Weekly named it the fourth-best piece of science fiction media and the fourth-best TV show of the previous 25 years; in 2009, the magazine ranked it fourth on its list of the "20 Greatest Sci-fi TV Shows" in history. Empire magazine ranked The X-Files the ninth-best TV show in history, further claiming that the best episode was the third-season entry "Jose Chung's From Outer Space". 
In 2013, the Writers Guild of America ranked The X-Files #26 on its list of the 101 Best Written TV Series. In 2015, The X-Files appeared at #3 on "Hollywood's 100 Favorite TV Shows", The Hollywood Reporter's list as ranked by the entertainment industry. According to The Guardian, MediaDNA research found that The X-Files topped a list of the most innovative TV brands. In 2009, it was announced that the show's catchphrase "The Truth Is Out There" was among Britain's top 60 best-known slogans and quotes. The X-Files has been criticized for being unscientific and privileging paranormal and supernatural ideas (e.g. the hypotheses made by Mulder). For instance, in 1998, Richard Dawkins wrote that "The X-Files systematically purveys an anti-rational view of the world which, by virtue of its recurrent persistence, is insidious." Over its nine-year run, The X-Files received 62 Emmy nominations, winning 16. Capping its successful first season, The X-Files crew members James Castle, Bruce Bryant and Carol Johnsen won the Emmy Award for Outstanding Individual Achievement in Graphic Design and Title Sequences in 1994. In 1995, the show was nominated for seven Emmy Awards, with one win. The following year, the show won five Emmys out of eight nominations, including Darin Morgan for Outstanding Writing for a Drama Series. In 1997, The X-Files won three awards out of twelve nominations, including Gillian Anderson for Outstanding Lead Actress in a Drama Series. In 1998, the show won one of its fifteen nominations. In 1999, it won one out of eight, in the category of Outstanding Makeup for a Series. Season seven won three Emmys from six nominations. The following season was not as successful, receiving only two nominations and winning again in the Makeup category for "Deadalive". The ninth season received one nomination, in Outstanding Music Composition for a Series (Dramatic Underscore). The show was nominated for 12 Golden Globe Awards overall, winning five. 
The first nomination came in 1994, when the show won Best Series – Drama. The following year, Duchovny and Anderson were nominated for Best Actor in a Leading Role and Best Actress in a Leading Role, respectively. In 1997, the series won three awards: Anderson for Best Actress, Duchovny for Best Actor, and Best Series – Drama. In 1998 and 1999, the show received the same three nominations; in 1998 the series won Best Series – Drama. In 1999, the series won no award, and it received no nominations thereafter. The show was nominated for 14 SAG Awards overall, winning twice: in 1996 and 1997, Anderson won for Outstanding Performance by a Female Actor in a Drama Series. In 1996, the show won a Peabody Award for being able "to convey ideas that are both entertaining and thought-provoking". The show has also been nominated for two American Cinema Editors awards, three Directors Guild of America Awards, nine Television Critics Association Awards and two Writers Guild of America Awards. The X-Files was nominated for nine Satellite Awards, winning two, and two Young Artist Awards, winning one.
Influence
As The X-Files saw its viewership expand from a "small, but devoted" group of fans to a worldwide mass cult audience, digital telecommunications were becoming mainstream. According to The New York Times, "this may have been the first show to find its audience growth tied to the growth of the Internet". Fans of the show became commonly known as "X-Philes", a term coined from the Greek root "-phil-", meaning love or obsession. X-Philes reviewed episodes on unofficial websites, formed communities with other fans through Usenet newsgroups and listservs, and wrote their own fan fiction. The X-Files also "caught on with viewers who wouldn't ordinarily consider themselves sci-fi fans". While Carter argued that the show was plot-driven, many fans saw it as character-driven. Duchovny and Anderson were characterized as "Internet sex symbols". 
As the show grew in popularity, subgroups of fans developed, such as "shippers", who hoped for a romantic or sexual partnership between Mulder and Scully or who already perceived one between the lines. The usage of the term "ship" in its relationship sense appears to have originated with Internet fans of The X-Files. Other groups arose to pay tribute to the stars or their characters, while others joined the subculture of "slash" fiction. In the summer of 1996, a journalist wrote, "There are entire forums online devoted to the 'M/S' [Mulder and Scully] relationship." In addition to "MOTW" (monster of the week), Internet fans invented acronyms such as "UST", meaning "unresolved sexual tension", and "COTR", standing for "conversation on the rock"—referencing a popular scene in the third-season episode "Quagmire"—to aid in their discussions of the agents' relationship, which was itself identified as the "MSR". The producers did not endorse some fans' readings, according to a study on the subject: Not content to allow Shippers to perceive what they wish, Carter has consistently reassured NoRomos [those against the idea of a Mulder/Scully romance] that theirs is the preferred reading. This allows him the plausible deniability to credit the show's success to his original plan even though many watched in anticipation of a romance, thanks, in part, to his strategic polysemy. He can deny that these fans had reason to do so, however, since he has repeatedly stated that a romance was not and would never be. The Scully-obsessed writer in Carter's 1999 episode "Milagro" was read by some as his alter ego, realizing by this point that "she has fallen for Mulder despite his authorial intent". The writers sometimes paid tribute to the more visible fans by naming minor characters after them. 
For example, Leyla Harrison, played by Jolie Jenkins and introduced in the eighth-season episode "Alone", was created and named in memory of an Internet fan and prolific writer of fan fiction of the same name, who died of cancer on February 10, 2001. The X-Files spawned an industry of spin-off products. In 2004, U.S.-based Topps Comics, and most recently DC Comics imprint Wildstorm, launched a new series of licensed tie-in comics. During the series' run, the Fox Broadcasting Company published the official The X-Files Magazine. The X-Files Collectible Card Game was released in 1996, and an expansion set was released in 1997. The X-Files has inspired four video games. In 1997, Fox Interactive released The X-Files: Unrestricted Access, a game-style database for Windows and Mac that allowed users access to every case file. In 1998, The X-Files Game was released for the PC and Macintosh, and a year later for the PlayStation. The game is set within the timeline of the second or third season and follows Agent Craig Willmore in his search for the missing Mulder and Scully. In 2004, The X-Files: Resist or Serve was released. The survival-horror game for the PlayStation 2 is an original story set in the seventh season and allows the player to control both Mulder and Scully. Both games feature acting and voice work from members of the series' cast. A mobile mystery investigation game, The X-Files: Deep State, was released in February 2018. The story of the game takes place between seasons 9 and 10 of the show and follows two FBI agents, Casey Winter and Garret Dale, as they investigate a conspiracy. A six-player pinball game, The X-Files, was produced by Sega in 1997. The X-Files directly inspired other TV series, including Strange World, The Burning Zone, Special Unit 2, Mysterious Ways, Lost, Dark Skies, The Visitor, Fringe, Warehouse 13, Supernatural, and Gravity Falls, with key aspects carried over to more standard crime dramas, such as Eleventh Hour and Bones. 
The influence can be seen on other levels: television series such as Lost developed their own complex mythologies. In terms of characterization, the role of Dana Scully was seen as innovative, changing "how women [on television] were not just perceived but behaved" and perhaps influencing the portrayal of other "strong women" investigators. Russell T Davies said The X-Files had been an inspiration for his series Torchwood, describing it as "dark, wild and sexy... The X-Files meets This Life". Other shows have been influenced by the tone and mood of The X-Files. For example, Buffy the Vampire Slayer drew from the mood and coloring of The X-Files, as well as from its occasional blend of horror and humor; creator Joss Whedon described his show as "a cross between The X-Files and My So-Called Life". The X-Files's great popularity led to it becoming a touchstone of popular culture. The show was parodied in The Simpsons season eight episode "The Springfield Files", which aired on January 12, 1997. In it, Mulder and Scully—voiced by Duchovny and Anderson—are sent to Springfield to investigate an alien sighting by Homer Simpson, but end up finding no evidence other than Homer's word and depart. The Cigarette Smoking Man appears in the background when Homer is interviewed, and the show's theme plays during one scene. Nathan Ditum of Total Film ranked Duchovny and Anderson's performances as the fourth-best guest appearances in The Simpsons history. In the Star Trek: Deep Space Nine episode "Trials and Tribble-ations", Benjamin Sisko is interviewed by Federation Department of Temporal Investigations agents Dulmer and Lucsly, anagrams of Mulder and Scully, respectively. The pair were later expanded upon in Christopher L. Bennett's book Watching the Clock. 
The X-Files has also been parodied or referenced in shows such as 3rd Rock from the Sun, Archer, NewsRadio, American Horror Story, The Big Bang Theory, Bones, Breaking Bad, Californication (which stars David Duchovny), Supernatural, Castle, Family Guy, Hey Arnold!, King of the Hill, South Park, and Two and a Half Men. It also inspired themes in the video games Deus Ex and Perfect Dark. In the musical realm, the British band Catatonia released the single "Mulder and Scully", which became a top-ten hit on the UK Singles Chart in 1998. In 1999, American singer and songwriter Bree Sharp wrote a song called "David Duchovny" about the actor that heavily references the show. Although never a mainstream hit, the song became popular underground and gained a cult following. Also in 1999, Finnish band Sonata Arctica released "Letter to Dana", in which the title character, Dana O'Hara, is named after Scully. The series has also been referenced in songs such as "The Bad Touch" by the Bloodhound Gang, "A Change" by Sheryl Crow, "Year 2000" by Xzibit, and "One Week" by Barenaked Ladies. Carter, Duchovny and Anderson celebrated the 20th anniversary of the series at a July 18, 2013, panel at San Diego Comic-Con hosted by TV Guide. During the discussion, Anderson discussed Scully's influence on female fans, relating that a number of women had informed her that they pursued physics careers because of the character. Anderson also indicated that she was not in favor of an X-Files miniseries, and Duchovny ruled out working with her on an unrelated project, but both expressed willingness to do a third feature film. Carter was more reserved about the idea, stating, "You need a reason to get excited about going on and doing it again." On July 16, 2008, Carter and Spotnitz donated several props from the series and the new film to the Smithsonian's National Museum of American History, including the original pilot script and the "I Want to Believe" poster from Mulder's office. 
In a 2018 interview with The Straits Times, series writers James Wong and Glen Morgan acknowledged that the show likely played a role in bringing conspiracy theories to a mainstream audience, helping to erode trust in public institutions. Similarly, in a 2021 New York Times op-ed, series creator Chris Carter wrote: "'The Truth Is Out There', 'Trust No One', 'Deny Everything' went the provocative catchphrases on The X-Files, but that was in the '90s, when we had a relatively shared reality. The slogans are now a fact of life." Vanity Fair writer Jordan Hoffman suggested that Carter's op-ed was imbued with "a bit of a mea culpa vibe". 
======================================== |
[SOURCE: https://github.com/trust-center] | [TOKENS: 1081] |
Trust at GitHub
Develop, scale, and innovate securely with GitHub. We enable developers and organizations to maximize their potential by prioritizing security, privacy, compliance, and transparency as we develop and iterate on GitHub Copilot. A scalable and secure cloud-based unified software development platform that meets the demands of the world’s leading organizations. The world’s most adopted AI-powered developer platform. Boost your software security with access to GitHub's comprehensive vulnerability database. Fix security vulnerabilities 7x faster than the industry average with GitHub Advanced Security. Learn about new vulnerabilities, insecure patterns, and security practices from our experts. GitHub is committed to the advancement of safe, secure, and trustworthy AI. We believe in the power of AI to enhance efficiency and innovation across the software development life cycle to increase developer happiness. From GitHub Copilot to hosting open source models, GitHub continues to ensure that AI advancements are accessible and beneficial to all. GitHub follows Microsoft's Responsible AI Standard in designing, building, and testing its AI systems in our products. The Microsoft RAI Standard has six principles–accountability, transparency, fairness, reliability & safety, privacy & security, and inclusiveness–that product teams consider in order to responsibly develop and deploy generative AI systems. These principles align with our commitment to develop safe, secure, and trustworthy AI systems. 
Furthermore, the implementation of these six principles aligns with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF): govern, map, measure, and manage. Within GitHub, our Responsible AI Champions help map, measure, and manage risks associated with using generative AI in coding. GitHub has completed a Responsible AI Impact Assessment as well as security and privacy reviews to map the different risks associated with its AI products. You can learn more by reading the Responsible AI Transparency Report, which includes a case study on how GitHub mapped and managed key risks and measured the effectiveness of GitHub Copilot and Copilot Chat on developer productivity. At the heart of our work is a deep respect for your privacy, guiding the decisions we make. We prioritize your trust, control, and the context of your data to ensure your rights are always protected. GitHub has four privacy principles: Privacy Protects People; Privacy Requires Trust, Control, and Transparency; Privacy is Contextual; Privacy is the Expectation. GitHub is dedicated to safeguarding your right to privacy by ensuring all employees uphold privacy standards and address any misuse of personal information. GitHub provides clear tools and choices to give you control over your privacy, is transparent about data usage, and uses your data to enhance your experience. GitHub processes information based on its context, respecting local privacy laws and advocating for your privacy rights. Privacy is built into everything GitHub does from the start, fostering trust with users, customers, and partners through a strong commitment to privacy.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Poseidon] | [TOKENS: 12459] |
Poseidon Poseidon (/pəˈsaɪdən, pɒ-, poʊ-/; Ancient Greek: Ποσειδῶν, romanised: Poseidôn) is one of the twelve Olympians in ancient Greek religion and mythology, presiding over the sea, storms, earthquakes and horses. He was the protector of seafarers and the guardian of many Hellenic cities and colonies. In pre-Olympian Bronze Age Greece, Poseidon was venerated as a chief deity at Pylos and Thebes, with the cult title "earth shaker"; in the myths of isolated Arcadia, he is related to Demeter and Despoina and was venerated as a horse, and as a god of the waters. Poseidon maintained both associations among most Greeks: he was regarded as the tamer or father of horses, who, with a strike of his trident, created springs (the terms for horses and springs are related in the Greek language). His Roman equivalent is Neptune. Homer and Hesiod suggest that Poseidon became lord of the sea when, following the overthrow of his father Cronus, the world was divided by lot among Cronus' three sons; Zeus was given the sky, Hades the underworld, and Poseidon the sea, with the Earth and Mount Olympus belonging to all three. In Plato's Timaeus and Critias, the legendary island of Atlantis was Poseidon's domain. In Homer's Iliad, Poseidon supports the Greeks against the Trojans during the Trojan War. In the Odyssey, during the sea-voyage from Troy back home to Ithaca, the Greek hero Odysseus provokes Poseidon's fury by blinding his son, the Cyclops Polyphemus, whereupon Poseidon punishes him with storms, causing the loss of his ship and many of his companions and delaying his return by ten years. Poseidon is famous for his contests with other deities for the patronage of cities. According to legend, Athena became the patron goddess of the city of Athens after a competition with Poseidon, though he remained on the Acropolis in the form of his surrogate, Erechtheus.
After the fight, Poseidon sent a monstrous flood to the Attic plain to punish the Athenians for not choosing him. In similar competitions with other deities in different cities, he causes devastating floods when he loses. Poseidon is a horrifying and avenging god and must be honoured even when he is not the patron deity of the city. Some scholars have suggested that Poseidon was probably a Pelasgian god or a god of the Minyans; however, it is possible that Poseidon, like Zeus, was a common god of all Greeks from the beginning. Etymology The earliest attested occurrence of the name, written in Linear B, is 𐀡𐀮𐀅𐀃 Po-se-da-o or 𐀡𐀮𐀅𐀺𐀚 Po-se-da-wo-ne, which correspond to Ποσειδάων (Poseidaōn) and Ποσειδάϝoνος (Poseidawonos) in Mycenean Greek; in Homeric Greek, it appears as Ποσιδάων (Posidaōn); in Aeolic, as Ποτε(ι)δάων (Pote(i)daōn); in Doric, as Ποτειδάν (Poteidan) and Ποτειδᾶς (Poteidas); in Arcadic, as Ποσoιδᾱν (Posoidan). In Laconian inscriptions from Tainaron, Helos and Thuria it appears as Ποὁιδάν (Pohoidan), indicating that the Dorians took the name from the older population. The form Ποτειδάϝων (Poteidawōn) appears in Corinth. The origins of the name "Poseidon" are unclear, and scholars have proposed conflicting etymologies. One theory breaks it down into an element meaning "husband" or "lord" (Greek πόσις (posis), from PIE *pótis) and another element meaning "earth" (δᾶ (da), Doric for γῆ (gē)), producing something like lord or spouse of Da, i.e. of the earth; this would link him with Demeter, "Earth-mother". Burkert finds that "the second element δᾶ- remains hopelessly ambiguous" and finds a "husband of Earth" reading "quite impossible to prove". According to Beekes in the Etymological Dictionary of Greek, "there is no indication that δᾶ means 'earth'", although the root da appears in the Linear B inscription E-ne-si-da-o-ne, "earth-shaker".
Another theory interprets the second element as related to the (presumed) Doric word *δᾶϝον dâwon, "water", Proto-Indo-European *dah₂- "water" or *dʰenh₂- "to run, flow", Sanskrit दन् dā́-nu- "fluid, drop, dew" and names of rivers such as the Danube (< *Danuvius) or the Don. This would make *Posei-dawōn the master of waters. Plato in his dialogue Cratylus gives two traditional etymologies: either the sea restrained Poseidon when walking as a "foot-bond" (ποσίδεσμον), or he "knew many things" (πολλά εἰδότος or πολλά εἰδῶν). Beekes suggests that the word probably has a Pre-Greek origin. The original form was probably the Mycenean Greek Ποτ(σ)ειδάϝων (Pot(s)eidawōn): "The intervocalic aspiration suggests a Pre-Greek (Pelasgian) origin rather than an Indoeuropean one". Bronze Age Greece If surviving Linear B clay tablets can be trusted, the names po-se-da-wo-ne and Po-se-da-o ("Poseidon") occur with greater frequency than does di-u-ja ("Zeus"). A feminine variant, po-se-de-ia, is also found, indicating a lost consort goddess, in effect the precursor of Amphitrite.[original research?] Poseidon was the chief god at Pylos. The title wa-na-ka appears in the inscriptions, and Poseidon was identified with the title wanax from the Homeric era to classical Greece. The title meant not only king but also protector. Wanax had chthonic aspects, and he was closely associated with Poseidon, who had the title "Lord of the Underworld". The chthonic nature of Poseidon is also indicated by his title E-ne-si-da-o-ne (Earth-shaker) in Mycenean Knossos and Pylos. Through Homer the epithet was also used in classical Greece (ennosigaios, ennosidas). Po-tini-ja (potnia: lady or mistress) was the chief goddess at Pylos, and she was closely associated with Poseidon. She was the Mycenean goddess of nature, and Poseidon-Wanax is one of the gods who may be considered her "male paredros". The earth-shaker received offerings in the cave of the goddess of childbirth Eileithyia at Amnisos in Crete.
Poseidon is allied with Potnia and the divine child. Wa-na-ssa (anassa: queen or lady) appears in the inscriptions, usually in the plural (Wa-na-ssoi). The dual number is common in Indo-European grammar (usually for chthonic deities like the Erinyes), and the duality was used for Demeter and Persephone in classical Greece (the double-named goddesses). Potnia and wanassa refer to identical deities or two aspects of the same deity. E-ri-nu (Erinys) is attested in the inscriptions. In some ancient cults Erinys is related to Poseidon, and her name is an epithet of Demeter. It is possible that Demeter appears as Da-ma-te in a Linear B inscription (PN EN 609); however, the interpretation is still disputed. Si-to Po-tini-ja is probably related to Demeter as goddess of grain. Tablets from Pylos record sacrificial goods destined for "the Two Ladies and the Lord" (or "to the Two Queens and the King": wa-na-soi, wa-na-ka-te). Wa-na-ssoi may be related to Demeter and Persephone, or their precursors, goddesses who were not associated with Poseidon in later periods. During the Mycenean period, the ancestral male gods of the Myceneans were probably not represented in human form, and the information given by the tablets found at Pylos and Knossos is insufficient. Poseidon was the chief deity at Pylos and Thebes. He is identified with Anax and he carried the title "Master of the Underworld".[citation needed] Anax probably had a cult associated with the protection of the palace. In Acrocorinth he was worshipped as Poseidon Anax during the Mycenean age. In the city there was the famous spring Peirene, which in a myth is related to the winged horse Pegasus. In Attica there was a cult of the Anax heroes, which was connected to Poseidon. A cult title of Poseidon was "earth-shaker", and in Knossos he was worshipped together with the goddess Eileithyia, who was related to the annual birth of the divine child. Potnia was the Mycenean goddess of nature and she was the consort of Poseidon at Pylos.
She is mentioned together with bucrania on decorated jugs, and Poseidon was associated with animals, especially the bull. In Athens Poseidon was an inland god who created the salt spring Erechthēis (Ερεχθηίς), the "sea of Erechtheus". On the Acropolis his cult was superimposed on the cult of the local ancestral figure Erechtheus. In Athens and Asine he was worshipped in the house of the king during the Mycenean period. The bull was the favourite animal for sacrifices, and it seems that horses were rarely used during the burial of the Mycenean leaders. In the Arcadian myths, Poseidon is related to Demeter and Despoina, and he was worshipped with the surname Hippios in many Arcadian cities. At Thelpusa and Phigalia there were sister cults which are very important for the study of primitive religions. In these cults Demeter and Poseidon were chthonic divinities of the underworld. Near Thelpusa the river Ladon descended to the sanctuary of Demeter Erinys (Demeter-Fury). During her wandering in search of her daughter, Demeter changed into a mare to avoid Poseidon. Poseidon took the form of a stallion, and after their mating she gave birth to a daughter whose name was not allowed to be told to the uninitiated, and a horse called Arion ("very swift"). Her daughter evidently had the shape of a mare too. At first Demeter was angry, and she was given the surname Erinys (fury) by the Thelpusians. The Erinyes were deities of vengeance, and Erinys had a function similar to that of the goddess Dike (Justice). In the very old myth of Thelpusa, Demeter-Erinys and Poseidon are divinities of the underworld in a pre-mythic period. Poseidon appears as a horse. In Greek folklore horses had chthonic associations, and it was believed that they could create springs. In European folklore water-creatures or water-spirits appear in the shape of a horse or a bull. In Greece the river god Acheloos is represented as a bull or a man-bull.
Those sacrificing to Demeter were required to make a preliminary sacrifice to Acheloos. At Phigalia Demeter had a sanctuary in a cavern and she was given the surname Melaina (black). The goddess was related to the black underworld. In a similar myth Poseidon appears as a horse and Demeter gives birth to a daughter whose name was not allowed to be told to the uninitiated (at Lycosura her daughter was called Despoina). Demeter, angry with Poseidon, put on a black garment and shut herself in the cavern. When the fruits of the earth perished, Zeus sent the Moirai to Demeter, who listened to them and laid aside her wrath. In this cult we have traces of a very old cult of Demeter and Poseidon as deities of the underworld. In another Arcadian myth, when Rhea had given birth to Poseidon, she told Cronus that she had given birth to a horse, and gave him a foal to swallow instead of the child. In the Homeric Hymn Demeter puts a dark mourning robe around her shoulders as a sign of her sorrow. Demeter's mare-form was worshipped into historical times. The xoanon of Melaina at Phigalia shows how the local cult interpreted her as a goddess of nature: a Medusa type with a horse's head and snaky hair, holding a dove and a dolphin, probably representing her power over air and water. The myth of Poseidon appearing as a horse and mating with Demeter was not confined to Arcadia. At Haliartos in Boeotia near Thebes Poseidon appears as a stallion. He mates with Erinys near the spring of Tilpousa, and she gives birth to the fabulous horse Arion. At Tilpousa we have a very old cult of the chthonic deities Erinys and Poseidon. The water-god Poseidon appears as a horse which seems to represent the water-spirit, and Erinys is probably the personification of an avenging earth-spirit. From earlier times at Delphi Poseidon was joined in a religious union with the earth-goddess Ge. She is represented as a snake, which is a form of the earth-spirit.
In the Theogony of Hesiod, Poseidon once slept with the monstrous Medousa near Mount Helikon. She conceived the winged horse Pegasus, who sprang out of her body when Perseus cut off her head. Pegasus struck the ground with his hoof and created the famous spring Hippocrene near Helikon. The Praxidicai were female deities of judicial punishment worshipped in the region of Haliartos in historical times. Their origin is probably the same as that of Erinys. Their images depicted only the heads of the goddesses, probably a representation of the earth goddess emerging from the ground. Praxidice is an epithet of Persephone in the Orphic Hymns. Persephone is sometimes depicted with her head emerging from the ground. Origins During the Mycenean period Poseidon was worshipped in several regions of Greece. At Pylos and some other cities he was a god of the underworld ("Lord of the Underworld") and his cult was related to the protection of the palace. He carried the title anax, king or protector. His consort potnia, lady or mistress, was the Mycenean goddess of nature. Her main aspects were birth and vegetation. Poseidon had the title "Enesidaon" (earth-shaker), and in Crete he was associated with the goddess of childbirth Eileithyia. Through Homer the Mycenean titles were also used in classical Greece with similar meaning. He was identified with anax and he carried the epithets "Ennosigaios" and "Ennosidas" (earth-shaker). Potnia was a title which accompanied female goddesses. The goddess of nature survived in the Eleusinian cult, where the following words were uttered: "Mighty Potnia bore a strong son". In the heavily sea-dependent Mycenaean culture, there is not sufficient evidence that Poseidon was connected with the sea; it is unclear whether "Posedeia" was a sea-goddess. The Greek invaders came from far inland and were not familiar with the sea.
In the primitive Boeotian and Arcadian myths Poseidon, the god of the underworld, appears as a horse and mates with the earth goddess. The earth goddess is called Erinys or Demeter, and she gives birth to the fabulous horse Arion and the unnamed daughter Despoina. The horse represents the divine spirit (numen) and is related to the liquid element and the underworld. In Greek folklore the horse is associated with the underworld, and it was believed that it had the ability to create springs. In European folklore the water-spirit appears in the shape of a horse or a bull. In Greece the river god Acheloos is represented as a bull or a man-bull. Burkert suggests that the Hellenic cult of Poseidon as a horse god may be connected to the introduction of the horse and war-chariot from Anatolia to Greece around 1600 BC. In the Boeotian myth Poseidon is the water-god and Erinys is a goddess of the underworld. She is probably the personification of an avenging earth spirit, and it seems that she had a function similar to that of the goddess Dike (Justice). At the spring Tilpousa she gives birth to Arion. In the Arcadian myth Poseidon Hippios (horse) mates with the mare-Demeter. At Thelpousa Demeter-Erinys gives birth to Arion and to an unnamable daughter who has the shape of a mare. In some neighbouring cults the daughter was called Despoina (mistress). The theriomorphic form of the gods seems to be local to Arcadia, in an old religion associated with xoana. According to some theories Poseidon was a Pelasgian god or a god of the Minyans. Traditionally the Minyans are considered Pelasgians, and they lived in Thessaly and Boeotia. In Thessaly (Pelasgiotis) there was a close relation to horses. Poseidon created the first horse Skyphios by hitting a rock with his trident, and in the same way he managed to drain the valley of Tempe. The Thessalians were famous charioteers. Some of the oldest Greek myths appear in Boeotia. In ancient cults Poseidon was worshipped as a horse.
The horse Arion was the offspring of Poseidon, in horse form, and Erinys, and the winged horse Pegasus the offspring of Poseidon, foaled by Medousa. At Onchestos he had an old and famous festival which included horse racing. However, it is possible that Poseidon, like Zeus, was a common god of all Greeks from the beginning. It is possible that the Greeks did not bring with them other gods except Zeus, Eos, and the Dioskouroi. The Pelasgian god probably represented the fertilising power of water, and was then considered god of the sea. As the sea encircles and holds the earth in its position, Poseidon is the god who holds the earth and who has the ability to shake it. The primeval water which encircled the earth (Oceanus) is the origin of all rivers and springs; they are children of Oceanus and Tethys. Farnell suggested that Poseidon was originally the god of the Minyans who occupied Thessaly and Boeotia. There is a similarity between the Boeotian and Arcadian myths, especially between the myths which represent the god of the waters Poseidon as a horse. The mythical horse Arion appears in both regions. Poseidon's offspring, the winged horse Pegasus, creates famous springs near Helikon and at Troizen. Some springs of Poseidon have similar names in Boeotia and the Peloponnese. It is possible that the name of Poseidon Helikonios in Boeotia, whose festival included horse racing, derives from Mount Helikon. The Minyans had trade contacts with Mycenean Pylos, and the Achaeans adopted the cult of Poseidon Helikonios. The cult spread through the Peloponnese and then to Ionia when the Achaeans migrated to Asia Minor. Nilsson suggested that Poseidon was probably a common god of all Greeks from the beginning. The Greeks occupied Thessaly, Boeotia and the Peloponnese during the Bronze Age. In all these regions Poseidon was the god of horses. The origin of his cult was the Peloponnese, and he was the inland god of the Achaeans, the god of horses and earthquakes.
When the Achaeans migrated to Ionia there was a transition to regarding Poseidon as the god of the sea, because the Ionians were sea-dependent. He was no doubt originally the god of the waters. The Greeks believed that the cause of earthquakes was the erosion of rocks by the waters, by the rivers of the Peloponnese which they saw disappear into the earth and then burst out again. The god of the waters thus became the "earth-shaker". This is what the natural philosophers Thales, Anaximenes and Aristotle believed, and the folk belief was probably no different. In the Greek legends the spring Arethusa and the river Alpheus travelled underground beneath the sea and reappeared at Ortygia. In any case, the early importance of Poseidon can still be glimpsed in Homer's Odyssey, where Poseidon rather than Zeus is the major mover of events. In Homer, Poseidon is the master of the sea. He is described as a majestic, scary, and avenging monarch of the sea. Cult The worship of Poseidon extended all over Greece and southern Italy, but he was specially honoured in the Peloponnese, which is called "the residence of Poseidon", and in the Ionic cities. The significance of his cult is indicated by the names of cities like Poteidaia in the Chalkidiki peninsula and Poseidonia (Paestum), a Greek colony in Italy. Poseidion is a frequent Greek placename along coastlines and the name of a Greek colony on the Syrian coast. In Ionia his cult was introduced by Achaean colonists from Greece in the 11th century BC. Traditionally the colonists came from Pylos, where Poseidon was the principal god of the city. The god had a famous temple near Mount Mycale. The month Poseidaon is the month of the winter storms. The name of the month was used in Ionic territories, in Athens, in the islands of the Aegean and in the cities of Asia Minor. At Lesbos and Epidauros the month was called Poseidios. During this month Poseidon was worshipped as the "master of the sea" in a bright cult.
Poseidon was a major civic god of several cities: in Athens, he was second only to Athena in importance, while in Corinth and many cities of Ionia and Magna Graecia he was the chief god of the polis. Many festivals of Poseidon included athletic competitions and horse racing. In Corinth his cult was related to the Isthmian games. In Arcadia his cult was related to the games "Hippocrateia", and at Sparta he had a temple near a hippodrome. At Onchestos in Boeotia horse racing was part of the athletic games in honour of the god. Poseidon was considered a symbol of unity. The Panionia, the festival of all Ionians near Mycale, was celebrated in honour of Poseidon Helikonios and was the meeting place of the Ionian League. He was the patron god of the Amphictyony of Kalaureia. At Onchestos in Boeotia he was worshipped as Poseidon Helikonios, and his sanctuary became the meeting place of the second Boeotian league. At Helike in Achaea there was the famous temple of Poseidon Helikonios, which was the meeting place of the Achaean League. The "master of the sea" creates clouds and storms, but he is also the protector of sailors. He has the ability to calm the sea for a good voyage and save those who are in danger, and he was worshipped with the surname "savior" as the protector of seafarers and fishermen. He is the "earth-shaker", yet he is also the protector against earthquakes; in some cults he was worshipped as the "bringer of safety" or "protector of the house and the foundations". The god was considered the creator of the first horse, and it was believed that he taught men the art of taming horses. He was depicted on horseback, or riding in a chariot drawn by two or four horses. He had many temples in Arcadia with the surname Hippios (of the horse), and he was also transformed into a horse to seduce Demeter.
Being the god of waters, Poseidon is related to the primeval water which encircles the earth (Oceanus), the father of all rivers and springs. He can create springs with the strike of his trident, and he was worshipped as "ruler of the springs" and "leader of the nymphs". In Thessaly it was believed that he drained the area by cutting the rocks of Tempe with his trident. In Greek folklore the horse can also create springs. As god of the sea Poseidon was also god of fishing, and especially of sea-fishing. Tuna was offered to him by the fishermen during the festal meal for the protection of the nets. The tuna, and later the dolphin, was his attribute. He was worshipped in many islands and cities by the coast. At Corcyra a roaring bull near the sea-shore guaranteed good fishing. The devastating storms of Poseidon concerned the fishermen, who poured drink offerings to Poseidon the savior into the sea. The god of inland waters is very close to vegetation, and Poseidon was worshipped in many cities as a god of vegetation. The Haloa in Athens was a festival of vegetation. The Protrygaia, a wine festival, seems to have belonged to Dionysus and Poseidon. In several cities Poseidon was worshipped in relation to genealogy and the phratry. At Tinos he was worshipped as a healer-god, probably a forerunner of the famous Evangelistria. The bull is related to Poseidon mainly in Ionia. The sacrifice of a bull offered to Poseidon is mentioned by Homer in an Ionic festival (the Panionia). The sacrifices offered to Poseidon consisted of black and white bulls which were killed or thrown into the sea. Boars and rams were also used, and in Argolis horses were thrown into a well as a sacrifice to him. In his benign aspect, Poseidon was seen as creating new islands and offering calm seas. When offended or ignored, he supposedly struck the ground with his trident and caused chaotic springs, earthquakes, drownings and shipwrecks.
Sailors prayed to Poseidon for a safe voyage, sometimes drowning horses as a sacrifice; in this way, according to a fragmentary papyrus, Alexander the Great paused at the Syrian seashore before the climactic battle of Issus and resorted to prayers, "invoking Poseidon the sea-god, for whom he ordered a four-horse chariot to be cast into the waves". According to Pausanias, Poseidon was one of the caretakers of the oracle at Delphi before Olympian Apollo took it over. Apollo and Poseidon worked closely in many realms: in colonization, for example, Delphic Apollo provided the authorization to go out and settle, while Poseidon watched over the colonists on their way and provided the lustral water for the foundation-sacrifice. At one time Delphi belonged to him in common with Ge, but Apollo gave him the psychopompeion of Kalaureia as compensation for it. Xenophon's Anabasis describes a group of Spartan soldiers in 400–399 BC singing to Poseidon a paean, a kind of hymn normally sung for Apollo. Like Dionysus, who inflamed the maenads, Poseidon also caused certain forms of mental disturbance. A Hippocratic text of c. 400 BC, On the Sacred Disease, says that he was blamed for certain types of epilepsy. Poseidon is still worshipped today in modern Hellenic religion, among other Greek gods. The worship of Greek gods has been recognized by the Greek government since 2017. Poseidon had a variety of roles, duties and attributes. He is a separate deity from the oldest Greek god of the sea, Pontus. In Athens his name is superimposed on the name of the non-Greek god Erechtheus (Ἑρεχθεύς), as Poseidon Erechtheus. In the Iliad, he is the lord of the sea, and his golden palace is built at Aegai, in the depths of the sea. His significance is indicated by his titles Eurykreion (Εὐρυκρείων), "wide-ruling", an epithet also applied to Agamemnon, and Helikonios anax (Ἑλικώνιος ἄναξ), "lord of Helicon or Helike". In Helike of Achaia he was specially honoured.
Anax is identified in Mycenaean Greek (Linear B) as wa-na-ka, a title of Poseidon as king of the underworld. Aeschylus also uses the epithet anax, and Pindar the epithet Eurymedon (Εὐρυμέδων), "widely ruling". Some of the epithets applied to him, like Enosigaios (Ἐνοσίγαιος) and Enosichthon (Ἐνοσίχθων) (Homer) and Ennosidas (Ἐννοσίδας) (Pindar), mean "earth shaker". These epithets indicate his chthonic nature and are attested even earlier in Linear B, as 𐀁𐀚𐀯𐀅𐀃𐀚, E-ne-si-da-o-ne. Other epithets that relate him to earthquakes are Gaieochos (Γαιήοχος), meaning "holder of the earth", and Seisichthon (Σεισίχθων). The god who causes the earthquakes is also the protector against them, and he had the epithets Themeliouchos (Θεμελιούχος), "upholding the foundations", and Asphaleios (Ἀσφάλειος), "securer, protector", with a temple at Tainaron. Pausanias describes a sanctuary of Poseidon near Sparta beside the shrine of Alcon, where he had the surname Domatites (Δωματίτης), "of the house". Homer uses for Poseidon the title Kyanochaites (Κυανοχαίτης), "dark-haired, dark blue of the sea". Epithets like Pelagios (Πελάγιος), "of the open sea"; Aegeus (Αἰγαίος), "of the high sea", in the town of Aegae in Euboea, where he had a magnificent temple upon a hill; Pontomedon (Ποντομέδων), "lord of the sea" (Pindar, Aeschylus); and Kymothales (Κυμοθαλής), "abounding with waves", indicate that Poseidon was regarded as holding sway over the sea. Other epithets that relate him to the sea are Porthmios (Πόρθμιος), "of the strait, narrow sea", at Karpathos; Epactaeus (Ἐπακταῖος), "god worshipped on the coast", in Samos; and Alidoupos (Ἀλίδουπος), "sea resounding". The master of the sea who can cause devastating storms is also the protector of seafarers, and he was given the epithet sōtēr (Σωτήρ), "savior". His symbol is the trident, and he has the epithet Eutriaina (Εὐτρίαινα), "with goodly trident" (Pindar).
The god of the sea is also the god of fishing, and tuna was his attribute. At Lampsacus they offered fishes to Poseidon, and he had the epithet phytalmios (φυτάλμιος). His epithet Phykios (Φύκιος), "god of seaweeds", at Mykonos seems to be related to fishing. There he had a festival from which women were excluded, with special offerings also to Poseidon Temenites (Τεμενίτης), "related to an official domain". On the same day offerings were made to Demeter Chloe; Poseidon was therefore a promoter of vegetation. He had the epithet phytalmios (φυτάλμιος) at Mykonos, Troizen, Megara and Rhodes, comparable with Ptorthios (Πτόρθιος) at Chalcis. Poseidon had a close association with horses. He is known under the epithet Hippios (Ἵππειος), "of a horse or horses", usually in Arcadia. He had temples at Lycosura, Mantineia, Methydrium, Pheneos and Pallantion. At Lycosura he is related to the cult of Despoina. The later sanctuary near Mantineia was built by the emperor Hadrian. In Athens, on the hill of the horses, there was the altar of Poseidon Hippios and Athena Hippia. The temple of Poseidon was destroyed by Antigonus when he attacked Attica. He is usually the tamer of horses (Damaios, Δαμαίος, at Corinth) and the tender of horses, Hippokourios (Ἱπποκούριος), at Sparta, where he had a sanctuary near the sanctuary of Artemis Aiginea. In some myths he is the father of horses, either by spilling his seed upon a rock or by mating with a creature who then gave birth to the first horse. In Thessaly he had the title Petraios (Πετραῖος), "of the rocks": he hit a rock, and the first horse, Skyphios, appeared. He was closely related to springs, and with the strike of his trident he created them. He had the epithets Krenouchos (Κρηνούχος), "ruling over springs", and nymphagetes (Νυμφαγέτης), "leader of the nymphs". On the Acropolis of Athens he created the salt spring called the Sea of Erechtheus (Ἐρεχθηίς θάλασσα). Many springs, like Hippocrene and Aganippe on Helikon, have names related to the word for horse (hippos)
Names such as Glukippe and Hyperippe likewise contain the word. He is the father of Pegasus, whose name is derived from πηγή (pēgē), "spring". Epithets like Genesios (Γενέσιος) at Lerna, Genethlios (Γενέθλιος), "of the race or family", Phratrios (Φράτριος), "of the brotherhood", and Patrigenios (Πατριγένειος) indicate his relation to family descent and the brotherhood. Other epithets of Poseidon in local cults are Epoptes (Ἐπόπτης), "overseer, watcher", at Megalopolis; Empylios (Ἐμπύλιος), "at the gate", at Thebes; Kronios (Κρόνιος) (Pindar); and semnos (σεμνός), "august, holy" (Sophocles). Some of Poseidon's epithets are related to festivals and athletic games, including racing. At Corinth the Isthmian games were an athletic and musical festival in honour of the god, who had the epithet Isthmios (Ἴσθμιος). At Sparta there was the race in Gaiaochō (ἐν Γαιαόχῳ); Poseidon Gaiēochos (Γαιήοχος) had a temple near the city beside a hippodrome. At Mantineia and Pallandion in Arcadia the Hippokrateia (Ἱπποκράτεια) were athletic games in honour of Poseidon Hippeios (Ἵππειος). At Ephesus there was a festival, the "Tavria", and he had the epithet Taureios (Ταύρειος), "related to the bull". Many festivals all over Greece, in the Ionic cities and in Italy, were celebrated in honour of Poseidon. Temples of Poseidon The Corinthians are considered to be the inventors of the Doric order. However, Corinth was completely destroyed and rebuilt, and there is not sufficient evidence for the existence of the earliest Doric Greek temples in the city. A building constructed in the early 7th century BC (c. 690–650 BC) at Isthmia near Corinth, which was later dedicated to Poseidon, is considered a pioneering building featuring Doric architecture. It seems that the first temple with pure Doric elements was built with the aid of Corinthians at Thermon in Aetolia in the middle of the 7th century BC (c. 640–630 BC). 
It was a narrow peripteral wooden structure dedicated to Apollo. It measured 12.13 × 38.23 m at the stylobate, and the number of pteron columns was 5 × 15. In the earlier temples the peripteral colonnade is treated with a freedom unknown to later Doric architects. This is in part an especially western feature (in Italy), because the hexastyle scheme was adopted, as in the temple of Poseidon at Taranto and the second temple of Hera at Paestum (traditionally named the temple of Poseidon). In the earlier temples, where the number of the columns in the porch is odd, so are the columns of the pteron facade. In such temples the side ptera are approximately the width of one or two intercolumniations. In the hexastyle scheme, like the temple of Poseidon at Sounion, there are normally two or four columns in the porch, and the side ptera are approximately the width of one intercolumniation. In early Doric work the distance between column and column differs on the fronts and on the flanks, and this can be observed in the temple of Poseidon at Kalaureia and in the Basilica at Paestum. After the 6th century the rule in Doric is an approximate equality of intercolumniations, as can be observed in the temple of Poseidon at Sounion, where there is only a slight difference. Mythology In the standard version, Poseidon was born to the Titans Cronus and Rhea, the fifth child out of six, born after Hestia, Demeter, Hera and Hades in that order. Because Poseidon's father was afraid that one of his children would overthrow him as he had done to his own father, Cronus devoured each infant as soon as they were born. Poseidon was the last one to suffer this fate before Rhea decided to deceive Cronus and whisk the sixth child, Zeus, away to safety, after offering Cronus a rock wrapped in a blanket to eat. Once Zeus was grown, he gave his father a powerful emetic that made him disgorge the children he had eaten. 
The five children emerged from their father's belly in reverse order, making Poseidon both the second-youngest and the second-oldest child at the same time. Armed with a trident forged for him by the Cyclopes, Poseidon with his siblings and other divine allies defeated the Titans and became rulers in their place. According to Homer and Apollodorus, Zeus, Poseidon and the third brother, Hades, then divided the world between them by drawing lots; Zeus got the sky, Poseidon the sea, and Hades the Underworld. In a rarer, and later, version, Poseidon avoided being devoured by his father because his mother Rhea saved him in the same manner she did Zeus, by offering Cronus a foal instead and claiming she had given birth to a horse rather than a god, while she had actually hidden the child among a flock. Rhea entrusted her infant to a spring nymph. When Cronus demanded the child, the nymph Arne denied having him, and her spring was thereafter called Arne (which bears resemblance to the Greek word for 'deny'). In another tale, Rhea gave Poseidon to the Telchines, ancient inhabitants of the island of Rhodes; Capheira, an Oceanid nymph, became the young god's nurse. As Poseidon grew, he fell in love with Halia, the beautiful sister of the Telchines, and fathered six sons and one daughter, Rhodos, on her. By that time Aphrodite, the goddess of love, had been born and risen from the sea, and attempted to make a stop at Rhodes on her way to Cyprus. Poseidon and Halia's sons denied her hospitality, so Aphrodite cursed them with madness, and they raped Halia. After they had done so, Poseidon made them sink below the sea. In Homer's Odyssey, Poseidon has a home in Aegae. Poseidon broke off a piece of the island of Kos, called Nisyros, and threw it on top of Polybotes (Strabo also relates the story of Polybotes buried under Nisyros, but adds that some say Polybotes lies under Kos instead). Athena became the patron goddess of the city of Athens after a competition with Poseidon. 
Yet Poseidon remained a numinous presence on the Acropolis in the form of his surrogate, Erechtheus. At the dissolution festival at the end of the year in the Athenian calendar, the Skira, the priests of Athena and the priest of Poseidon would process under canopies to Eleusis. The two gods agreed that each would give the Athenians one gift and the Athenians would choose whichever gift they preferred. Poseidon struck the ground with his trident and a spring sprang up; the water was salty and not very useful, but it represented his true gift: access to trade. Athens at its height was a significant sea power, defeating the Persian fleet at the Battle of Salamis.[citation needed] For her part, Athena offered an olive tree. The Athenians, or their king Cecrops, accepted the olive tree and along with it Athena as their patron, for the olive tree brought wood, oil and food. After the contest, infuriated at his loss, Poseidon sent a monstrous flood to the Attic Plain to punish the Athenians for not choosing him. The depression made by Poseidon's trident and filled with salt water was surrounded by the northern hall of the Erechtheum, remaining open to the air. Burkert noted: "In cult, Poseidon was identified with Erechtheus", and "the myth turns this into a temporal-causal sequence: in his anger at losing, Poseidon led his son Eumolpus against Athens and killed Erechtheus." It was also said that Poseidon, in his anger over his defeat, sent one of his sons, Halirrhothius, to cut down Athena's gift of the olive tree. But as Halirrhothius swung his axe, he missed his aim, and it fell on him, killing him instantly. Poseidon in fury accused Ares of murder, and the matter was eventually settled in favour of Ares on the Areopagus ("hill of Ares"), which was thereafter named after the event. In other versions, Halirrhothius raped Alcippe, Ares's daughter, so Ares slew him. Poseidon was enraged over the murder of his son, and Ares was thus put on trial, which eventually acquitted him. 
The contest of Athena and Poseidon was the subject of the reliefs on the western pediment of the Parthenon, the first sight that greeted the arriving visitor. The Corinthians had a story about their own city similar to the foundation myth of Athens. According to the myth, Helios and Poseidon clashed, both desiring to make the city their own. Their dispute was brought to one of the Hecatoncheires, Briareus, an elder god, who was tasked with settling the fight between the two gods. Briareus decided to award the Acrocorinth to Helios, while to Poseidon he gave the Isthmus of Corinth. In this tale, Helios and Poseidon are supposed to represent fire versus water: Helios, as the sun god, received the area that is closest to the sky, while Poseidon, the sea god, got the isthmus by the sea. At another time, Poseidon came to an agreement with the goddess Leto that he would give her the island of Delos, the birthplace of her twins Artemis and Apollo, in exchange for the island of Calauria; he also exchanged Delphi for Taenarum with Apollo. A temple of Poseidon stood at Calauria in ancient times. Poseidon also came to dispute with his sister Hera over the city of Argos. A local king, Phoroneus, was chosen to settle the matter, and he decided to award the city to Hera, who then became its patron goddess. Poseidon was enraged and sent a drought to plague the city. One day, as an Argive woman named Amymone went out in search of water, she came upon a satyr who tried to rape her. Amymone prayed to Poseidon for help, and he scared the satyr away with his trident. After Poseidon rescued Amymone from the lecherous satyr, he fathered a child on her, Nauplius. Poseidon fathered the hero Theseus with the Troezenian princess Aethra. Theseus was also said to be the son of Aegeus, the king of Athens, who slept with Aethra on the very same night. Thus Theseus's origins included both the human and the divine element. 
Meanwhile, in Crete, Zeus's son Minos asked for Poseidon's help in order to certify his claim to the throne of Crete. Poseidon offered Minos a splendid white bull, with the understanding that Minos was to sacrifice the bull to Poseidon later. The Cretans were so impressed with the bull and the divine sign itself that Minos was declared king of Crete. But wishing to keep the beautiful animal for himself, Minos sacrificed an ordinary bull to the sea-god instead of the agreed-upon one. Poseidon, enraged, caused Minos's wife, Pasiphae, to fall in love with the bull; their coupling produced the Minotaur, a half-bull, half-human creature who fed on human flesh. Minos concealed him within the labyrinth built by Daedalus, and fed him the Athenian men and women he forced Aegeus to send over. Once Theseus was grown up and recognized by Aegeus as his son, he decided to end once and for all the bloody tribute Athens had to pay to Crete, and volunteered to set sail to Crete along with the other Athenian youths who had been chosen to be devoured by the Minotaur. Once he arrived in Crete, Minos insulted Theseus and insisted he was no son of Poseidon; to prove it, he threw his own ring into the sea and commanded Theseus to retrieve it, expecting he would not be able to do so. Theseus immediately dove in after it. Dolphins then came as guides and escorted him to the halls of Poseidon's palace, where he was warmly welcomed. He received the ring, and in addition a purple wedding cloak and a crown from the Nereid Amphitrite, to prove his words. Theseus then emerged from the sea and gave the ring to Minos. Theseus killed the Minotaur, and in time succeeded his father Aegeus as king of Athens. By an Amazon he had a son, Hippolytus, while his wife Phaedra (Minos's daughter) gave him two sons. At some point, Poseidon promised three favours to Theseus, and Theseus called upon Poseidon to fulfill one of them when Phaedra falsely accused Hippolytus of forcing himself on her. 
Theseus, not knowing the truth, asked his father to destroy Hippolytus; Poseidon granted his son's wish, and as Hippolytus was driving by the sea, Poseidon sent a terrifying sea monster to spook his horses, which then dragged him to his death. Poseidon and Apollo, having offended Zeus by their rebellion in Hera's scheme, were temporarily stripped of their divine authority and sent to serve King Laomedon of Troy. He had them build huge walls around the city and promised to reward them with his immortal horses, a promise he then refused to fulfill. In vengeance, before the Trojan War, Poseidon sent a sea monster to attack Troy. The monster was later killed by Heracles. Poseidon was said to have had many lovers of both sexes. His consort was Amphitrite, an ancient sea-goddess and nymph, daughter of Nereus and Doris. In one account, attributed to Eratosthenes, Poseidon wished to wed Amphitrite, but she fled from him and hid with Atlas. Poseidon sent out many searchers to find her, and it was a dolphin who tracked her down. The dolphin persuaded Amphitrite to accept Poseidon as her husband, and eventually took charge of their wedding. Poseidon then put him among the stars as a reward for his good services. Oppian says that the dolphin betrayed Amphitrite's whereabouts to Poseidon, and that he carried off Amphitrite against her will to marry her. Together they had a son named Triton, a merman. A mortal woman named Cleito once lived on an isolated island; Poseidon fell in love with her, created a dwelling sanctuary at the top of a hill near the middle of the island, and surrounded the dwelling with rings of water and land to protect her. She gave birth to five sets of twin boys; the firstborn, Atlas, became the first ruler of Atlantis. Poseidon had an affair with Alope, his granddaughter through Cercyon, his son and king of Eleusis, begetting Hippothoon. 
Cercyon had his daughter buried alive, but Poseidon turned her into the local spring. Poseidon was the father of many heroes. He is thought to have fathered the famed Theseus, as well as Bellerophon, Alebion and Bergion. Not all of Poseidon's children were human, though: his other children include the giants Otos and Ephialtes, the Cyclops Polyphemus, and Amycus, his son by the Bithynian nymph Melia. The philosopher Plato was held by his fellow ancient Greeks to have traced his descent to the sea-god Poseidon through his father Ariston and his mythic predecessors, the demigod kings Codrus and Melanthus. Poseidon engaged in homosexual relationships as well. He took the young Nerites, the son of Nereus and Doris (and thus brother to Amphitrite), as a lover. Nerites was also Poseidon's charioteer, and impressed all marine creatures with his speed. But one day the sun god, Helios, turned Nerites into a shellfish. Aelian, who recorded this tale as told by mariners, says it is not clear why Helios did this, but theorizes that he might have been offended somehow, or that he and Poseidon were rivals in love and Helios wanted Nerites to travel among the constellations instead of the sea-monsters. From the love between Poseidon and Nerites was born Anteros, mutual love. Other male lovers of Poseidon included Pelops and Patroclus. In an archaic myth, Poseidon once pursued Demeter. She spurned his advances, turning herself into a mare so that she could hide in a herd of horses; he saw through the deception, became a stallion, and captured and raped her. Their child was a horse, Arion, which was capable of human speech. According to Hesiod's Theogony, Poseidon "lay down in a soft meadow among spring flowers" with the Gorgon Medusa; their two offspring, the winged horse Pegasus and the warrior Chrysaor, were born when the hero Perseus cut off Medusa's head. Ovid, however, says that Medusa was originally a very beautiful maiden whom Poseidon raped inside the temple of Athena. 
Athena, furious over the sacrilege, changed the beautiful girl into a monster. Elsewhere in the Metamorphoses, Ovid says that Poseidon seduced Medusa in the form of a bird. When Zeus fell in love with and pursued the goddess Asteria, she transformed into a quail and flung herself into the sea to escape being raped by him. Poseidon then, equally rapacious, picked up the chase where Zeus had left it and pursued Asteria with the aim of forcing himself on her, so Asteria had to transform a second time to save herself, this time into a small rocky island named Delos. One day, Poseidon spotted Caenis walking by the seashore, caught her and raped her. Having enjoyed her greatly, he offered her a wish, any wish. Traumatized, Caenis wished to be transformed into a man, so that she would never experience assault again. Poseidon fulfilled her request and changed her into a male warrior, who then took the name Caeneus. A mortal woman named Tyro was married to Cretheus (with whom she had one son, Aeson), but loved Enipeus, a river god. She pursued Enipeus, who refused her advances. One day Poseidon, filled with lust for Tyro, disguised himself as Enipeus, and from their union were born the heroes Pelias and Neleus, twin boys. Another time Poseidon fell in love with a Phocian woman, Corone, the daughter of Coronaeus, as she was walking along the shore. He attempted to court her, but she rejected him and ran away. Poseidon then chased her down with the aim of raping her. Athena, witnessing all this, took pity on the girl and changed her into a crow. The following is a list of Poseidon's offspring, by various mothers. Beside each offspring, the earliest source to record the parentage is given, along with the century to which the source (in some cases approximately) dates. Genealogy In literature and art In Greek art, Poseidon lives in a palace on the ocean floor, made of coral and gems. He rides a chariot that is pulled by a hippocampus or by horses that could ride on the sea. 
He was associated with dolphins and three-pronged fish spears (tridents). A hymn to Poseidon included among the Homeric Hymns is a brief invocation, a seven-line introduction that addresses the god as both "mover of the earth and barren sea, god of the deep who is also lord of Mount Helicon and wide Aegae", and specifies his twofold nature as an Olympian: "a tamer of horses and a saviour of ships". In the Iliad, Poseidon favors the Greeks, and on several occasions takes an active part in the battle against the Trojan forces. However, in Book XX he rescues Aeneas after the Trojan prince is laid low by Achilles. In the Odyssey, Poseidon is notable for his hatred of Odysseus, who blinded the sea-god's son, the Cyclops Polyphemus; Poseidon punishes him with storms, the complete loss of his ship, and the deaths of his companions. The enmity of Poseidon prevents Odysseus's return home to Ithaca for ten years. Odysseus is even told, notwithstanding his ultimate safe return, that to placate the wrath of Poseidon will require one more voyage on his part. After Odysseus left the island of Calypso, Poseidon, in anger, let loose all four of the Anemoi to cause a storm and raise great waves in an attempt to drown him. In the Aeneid, Neptune is still resentful of the wandering Trojans, but is not as vindictive as Juno, and in Book I he rescues the Trojan fleet from the goddess's attempts to wreck it, although his primary motivation is his annoyance at Juno's having intruded into his domain. In modern culture Due to his status as a Greek god, Poseidon has made multiple appearances in modern and popular culture. Poseidon appeared in the 1963 film Jason and the Argonauts. Poseidon appears in the Percy Jackson & the Olympians novel series, where he is the father of the demigod protagonist Percy Jackson. In the first film adaptation, Percy Jackson & the Olympians: The Lightning Thief, he is portrayed by Kevin McKidd. 
Poseidon has made multiple appearances in video games, such as God of War III by Sony, in which he appears as a boss for the player to defeat. In the video game Hades, he is a character who grants "boons". 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Native_American_reservations] |
Contents Indian reservation An Indian reservation in the United States is an area of land held and governed by a Native American tribal nation officially recognized by the U.S. federal government. The reservation's government is autonomous but subject to regulations passed by the United States Congress, and is administered by the United States Bureau of Indian Affairs. It is not subject, however, to a state or local government of the U.S. state in which it is located. Some of the country's 574 federally recognized tribes govern more than one of the 326 Indian reservations in the United States, while some share reservations, and others have no reservation at all. Historical piecemeal land allocations under the Dawes Act facilitated sales to non–Native Americans, resulting in some reservations becoming severely fragmented, with pieces of tribal and privately held land being treated as separate enclaves. This intersection of private and public real estate creates significant administrative, political, and legal difficulties. The total area of all reservations is 56,200,000 acres (22,700,000 ha; 87,800 sq mi; 227,000 km2), approximately 2.3% of the total area of the United States and about the size of the state of Idaho. While most reservations are small compared to the average U.S. state, twelve Indian reservations are larger than the state of Rhode Island. The largest reservation, the Navajo Nation Reservation, is similar in size to the state of West Virginia. Reservations are unevenly distributed throughout the country, the majority being situated west of the Mississippi River and occupying lands that were first reserved by treaty (Indian Land Grants) from the public domain. Because recognized Native American nations are subject to federal law, they possess limited tribal sovereignty. Thus, laws within tribal lands may vary from those of the surrounding and adjacent states. 
For example, these laws can permit casinos on reservations located within states which do not allow gambling. The tribal council, rather than the government of the U.S. state in which the reservation is located, generally has jurisdiction over the reservation. Court jurisdiction in Indian country is shared between tribes and the federal government, depending on the tribal affiliation of the parties involved and the specific crime or civil matter. Different reservations have different systems of government, which may or may not replicate the forms of government found outside the reservation. Most Native American reservations were established by the federal government, but a small number, mainly in the East, owe their origin to state recognition. The term "reservation" is a legal designation. It comes from the conception of the Native American nations as independent sovereigns at the time the U.S. Constitution was ratified. Thus, early peace treaties (often signed under conditions of duress or fraud), in which Native American nations surrendered large portions of their land to the United States, designated parcels which the nations, as sovereigns, "reserved" to themselves, and those parcels came to be called "reservations". The term remained in use after the federal government began to forcibly relocate nations to parcels of land to which they often had no historical or cultural connection. Compared to other population centers in the U.S., reservations are disproportionately located on or near toxic sites hazardous to the health of those living or working in close proximity, including nuclear testing grounds and contaminated mines. The majority of American Indians (excluding Alaska Natives, who, with one exception, no longer have reservations) live outside the reservations, mainly in larger western cities such as Phoenix and Los Angeles. In 2012, there were more than 2.5 million Native Americans, of whom 1 million lived on reservations. 
History From the beginning of the European colonization of the Americas, Europeans often removed Indigenous peoples from their homelands. The means varied, including treaties made under considerable duress, forceful ejection, violence, and in a few cases voluntary moves based on mutual agreement. The removal caused many problems, such as tribes losing their means of livelihood by being restricted to a defined area, poor quality of land for agriculture, and hostility between tribes. Early English settlers in the Americas entered into treaties with Native American tribes as a method of legitimizing their conquests in the face of competing claims by the Spanish Empire and violent resistance from the tribes themselves. Applying the term "treaty" to such unequal relationships may seem paradoxical from a modern perspective because in modern English, the word "treaty" usually connotes an agreement between two states of theoretically equal sovereignty, not an agreement between conquered people and a conqueror. However, in premodern times it was common for European princes to enter into unequal treaties with lesser dependent powers. The first reservation was established by the Treaty of Easton with the colonial governments of New Jersey and Pennsylvania on August 29, 1758. Located in southern New Jersey, it was called Brotherton Indian Reservation and also Edgepillock or Edgepelick. The area was 3,284 acres (13.29 km2). Today it is called Indian Mills in Shamong Township. In 1764 the British government's Board of Trade proposed the "Plan for the Future Management of Indian Affairs". Although never adopted formally, the plan established the British government's expectation that land would only be bought by colonial governments, not individuals, and that land would only be purchased at public meetings. Additionally, this plan dictated that the Indians would be properly consulted when ascertaining and defining the boundaries of colonial settlement. 
The private contracts that once characterized the sale of Indian land to various individuals and groups—from farmers to towns—were replaced by treaties between sovereigns. This protocol was adopted by the United States government after the American Revolution. On March 11, 1824, U.S. Vice President John C. Calhoun founded the Office of Indian Affairs (now the Bureau of Indian Affairs) as a division of the United States Department of War (now the United States Department of Defense), to address the land problem through 38 treaties with American Indian tribes. Indian Treaties, and Laws and Regulations Relating to Indian Affairs (1825) was a document signed by President Andrew Jackson in which he states that "we have placed the land reserves in a better state for the benefit of society", approving Indigenous reservations before 1850. The letter is signed by Isaac Shelby and Jackson. It discusses several regulations regarding the Native Americans and the approval of Indigenous segregation and the reservation system. President Martin Van Buren negotiated a treaty with the Saginaw Chippewas in 1837 to build a lighthouse, showing that the President of the United States was directly involved in the creation of new treaties regarding Indian reservations before 1850. Van Buren stated that the indigenous reservations were "all their reserves of land in the state of Michigan, on the principle of said reserves being sold at the public land offices for their benefit and the actual proceeds being paid to them." The agreement dictated that the indigenous tribe sell their land to build a lighthouse. A treaty with the Oneida people in 1838, signed by Secretary of State John Forsyth on behalf of Van Buren, likewise dictated where indigenous peoples must live under the reservation system in America. This treaty allowed the indigenous peoples five years on a specific reserve, "the west shores of Saganaw bay". 
The creation of reservations for indigenous people of America could rest on an approval of as little as five years before 1850. Article two of the treaty claims "the reserves on the river Angrais and at Rifle river, of which said Indians are to have the usufruct and occupancy for five years." Indigenous people were thus constrained by the five-year allowance. Scholarly author Buck Woodard used executive papers from Governor William H. Cabell in his article "Indian Land Sales and Allotment in Antebellum Virginia" to discuss Indigenous reservations in America, specifically in Virginia. He claims "the colonial government again recognized the Nottoway's land rights by treaty in 1713, at the conclusion of the Tuscarora War." The indigenous peoples of America thus had land treaty agreements as early as 1713. The American Indigenous reservation system started with "the Royal Proclamation of 1763, where Great Britain set aside an enormous resource for Indians in the territory of the present United States." The United States put forward another act when "Congress passed the Indian Removal Act in 1830". A third measure followed when "the federal government relocated "portions of [the] 'Five Civilized Tribes' from the southeastern states in the Non-Intercourse Act of 1834." All three of these laws set into motion the Indigenous reservation system in the United States, resulting in the forceful removal of Indigenous peoples onto specific land reservations. Scholarly author James Oberly discusses "The Treaty of 1831 between the Menominee Nation and the United States" in his article "Decision on Duck Creek: Two Green Bay Reservations and Their Boundaries, 1816–1996", showing yet another treaty regarding Indigenous reservations before 1850. There was a conflict between the Menominee Nation and the State of Wisconsin, and "the 1831 Menominee Treaty … ran the boundary between the lands of the Oneida, known in the Treaty as the "New York Indians". 
This Treaty of 1831 is the cause of conflicts and is disputed because the land was good hunting ground. Of the Trade and Intercourse Act of 1834 it is said that "In the 1834 Indian Trade and Intercourse Act, the United States defined the boundaries of Indian Country." Also, "For Unrau, Indigenous Country is less an Indigenous homeland and more a place where the U.S. removed Indians from east of the Mississippi River and applied unique laws." The United States applied laws to Indigenous reservations depending on where they were located relative to the Mississippi River. This act also came about because "the federal government began to compress Indigenous lands because it needed to send troops to Texas during the Mexican-American War and protect American immigration traveling to Oregon and California." The federal government of America had its own needs and desires for Indigenous land reservations. He says, "the reconnaissance of explorers and other American officials understood that Indigenous Country possessed good land, bountiful game, and potential mineral resources." The American government claimed Indigenous land for its own benefit with these creations of Indigenous land reservations. States such as Texas had their own policy when it came to Indian reservations in America before 1850. Scholarly author George D. Harmon discusses Texas' own reservation system: "Prior to 1845, Texas had inaugurated and pursued her own Indian Policy", independent of the U.S. Texas was one of the states before 1850 that chose to create its own reservation system, as seen in Harmon's article "The United States Indian Policy in Texas, 1845–1860." The State of "Texas had given only a few hundred acres of land in 1840, for the purpose of colonization". However, "In March 1847, … [a] special agent [was sent] to Texas to manage the Indian affairs in the State until Congress should take some definite and final action." 
The United States allowed its states to make their own treaties, such as this one in Texas, for the purpose of colonization. The passage of the Indian Removal Act of 1830 marked the systematization of a U.S. federal government policy of moving Native populations away from European-populated areas, whether forcibly or voluntarily. One example was the Five Civilized Tribes, who were removed from their historical homelands in the Southeastern United States and moved to Indian Territory in a forced mass migration that came to be known as the Trail of Tears. Some of the lands these tribes were given to inhabit following the removals eventually became Indian reservations. In 1851, the United States Congress passed the Indian Appropriations Act, which authorized the creation of Indian reservations in Indian Territory (which became Oklahoma). Relations between white settlers and Natives had grown increasingly worse as the settlers encroached on territory and natural resources in the West. In 1868, President Ulysses S. Grant pursued a "Peace Policy" as an attempt to avoid violence. The policy included a reorganization of the Indian Service, with the goal of relocating various tribes from their ancestral homes to parcels of land established specifically for their inhabitation. The policy called for the replacement of government officials by religious men, nominated by churches, to oversee the Indian agencies on reservations and to teach Christianity to the Native American tribes. The Quakers were especially active in this policy on reservations. The policy was controversial from the start. Reservations were generally established by executive order. In many cases, white settlers objected to the size of land parcels, which were subsequently reduced. A report submitted to Congress in 1868 found widespread corruption among the federal Native American agencies and generally poor conditions among the relocated tribes. 
Many tribes ignored the relocation orders at first and were forced onto their limited land parcels. Enforcement of the policy required the United States Army to restrict the movements of various tribes. The pursuit of tribes in order to force them back onto reservations led to a number of wars with Native Americans, which included some massacres. The best-known conflict was the Sioux War on the northern Great Plains between 1876 and 1881, which included the Battle of the Little Bighorn. Other famous wars in this regard included the Nez Perce War and the Modoc War, which marked the last conflict officially declared a war. By the late 1870s, the policy established by President Grant was regarded as a failure, primarily because it had resulted in some of the bloodiest wars between Native Americans and the United States. By 1877, President Rutherford B. Hayes began phasing out the policy, and by 1882 all religious organizations had relinquished their authority to the federal Indian agency. In 1887, Congress undertook a significant change in reservation policy with the passage of the Dawes Act, or General Allotment (Severalty) Act. The act ended the general policy of granting land parcels to tribes as a whole, instead granting small parcels of land to individual tribe members. In some cases, for example the Umatilla Indian Reservation, after the individual parcels were granted out of reservation land, the reservation area was reduced by giving the "excess land" to white settlers. The individual allotment policy continued until 1934, when it was terminated by the Indian Reorganization Act. The Indian Reorganization Act of 1934, also known as the Wheeler-Howard Act, was sometimes called the Indian New Deal and was initiated by John Collier. It laid out new rights for Native Americans, reversed some of the earlier privatization of their common holdings, and encouraged tribal sovereignty and land management by tribes.
The act slowed the assignment of tribal lands to individual members and reduced the assignment of "extra" holdings to nonmembers. For the following 20 years, the U.S. government invested in infrastructure, health care, and education on the reservations, and over two million acres (8,000 km2) of land were returned to various tribes. Within a decade of Collier's retirement, the government's position began to swing in the opposite direction. The new Indian Commissioners, Dillon S. Myer and Glenn Emmons, introduced the idea of the "withdrawal program", or "termination", which sought to end the government's responsibility and involvement with Indians and to force their assimilation. The Indians would lose their lands but were to be compensated, although many were not. Even though discontent and social rejection killed the idea before it was fully implemented, five tribes were terminated (the Coushatta, Ute, Paiute, Menominee and Klamath) and 114 groups in California lost their federal recognition as tribes. Many individuals were also relocated to cities, but one-third returned to their tribal reservations in the decades that followed.

Governance

Federally recognized Native American tribes possess limited tribal sovereignty and are able to exercise the right of self-governance, including but not limited to the ability to pass laws, regulate power and energy, create treaties, and hold tribal court hearings. Laws on tribal lands may vary from those of the surrounding area. The laws passed can, for example, permit legal casinos on reservations. The tribal council, not the local government or the United States federal government, often has jurisdiction over reservations. Different reservations have different systems of government, which may or may not replicate the forms of government found outside the reservation.
With the establishment of reservations, tribal territories diminished to a fraction of their original areas; customary Native American practices of land tenure were sustained only for a time, and not in every instance. Instead, the federal government established regulations that subordinated tribes to the authority, first, of the military, and then of the Bureau (Office) of Indian Affairs. Under federal law, the government patented reservations to tribes, which became legal entities that have at times operated in a corporate manner. Tribal tenure carries jurisdiction over land-use planning and zoning, and over negotiating (with the close participation of the Bureau of Indian Affairs) leases for timber harvesting and mining. Tribes generally have authority over other forms of economic development such as ranching, agriculture, tourism, and casinos. Tribes hire members, other Indians, and non-Indians in varying capacities; they may run tribal stores and gas stations and develop museums (for example, there is a gas station and general store at Fort Hall Indian Reservation, Idaho, and a museum at Foxwoods, on the Mashantucket Pequot Indian Reservation in Connecticut). Tribal citizens may utilize resources held in tribal tenure, such as grazing range and some cultivable lands, and may construct homes on tribally held lands. As such, members are tenants-in-common, which may be likened to communal tenure. Even if some of this pattern emanates from pre-reservation tribal customs, generally the tribe has the authority to modify tenant-in-common practices. With the General Allotment (Dawes) Act of 1887, the government sought to individualize tribal lands by authorizing allotments held in individual tenure. Generally, the allotment process led to grouping family holdings and, in some cases, sustained pre-reservation clan or other patterns. There had been a few allotment programs before the Dawes Act.
However, the vast fragmentation of reservations occurred from the enactment of this act up to 1934, when the Indian Reorganization Act was passed. Even so, Congress authorized some allotment programs in the ensuing years, such as on the Palm Springs/Agua Caliente Indian Reservation in California. Allotment set in motion a number of consequences. Demographic change, coupled with landownership data, led, for example, to litigation between the Devils Lake Sioux and the State of North Dakota, where non-Indians owned more acreage than tribal members even though more Native Americans than non-Indians resided on the reservation. The court decision turned, in part, on the perception of Indian character, contending that the tribe did not have jurisdiction over the alienated allotments. In a number of instances, e.g., the Yakama Indian Reservation, tribes have identified open and closed areas within reservations. The majority of non-Indian landownership and residence is found in the open areas, whereas closed areas represent exclusive tribal residence and related conditions. Indian country today consists of tripartite government, i.e., federal, state and/or local, and tribal. Where state and local governments may exert some, but limited, law-and-order authority, tribal sovereignty is diminished. This situation prevails in connection with Indian gaming, because federal legislation makes the state a party to any contractual or statutory agreement. Finally, occupancy on reservations can be by virtue of tribal or individual tenure. There are many churches on reservations; most occupy tribal land by consent of the federal government or the tribe. Bureau of Indian Affairs (BIA) agency offices, hospitals, schools, and other facilities usually occupy residual federal parcels within reservations.
Many reservations include one or more sections (of about 640 acres each) of land set aside for schools, but such land typically remains part of the reservation (e.g., under Section 20 of the Enabling Act of 1910). As a general practice, such land may sit idle or be used for cattle grazing by tribal ranchers. In 1979, the Seminole Tribe of Florida opened a high-stakes bingo operation on its reservation. The state attempted to close the operation down but was stopped in the courts. In the 1980s, the case of California v. Cabazon Band of Mission Indians established the right of reservations to operate other forms of gambling operations. In 1988, Congress passed the Indian Gaming Regulatory Act, which recognized the right of Native American tribes to establish gambling and gaming facilities on their reservations as long as the states in which they are located have some form of legalized gambling. Today, many Native American casinos are used as tourist attractions, including as the basis for hotel and conference facilities, to draw visitors and revenue to reservations. Successful gaming operations on some reservations have greatly increased the economic wealth of some tribes, enabling investment in infrastructure, education, and health for their people. Serious crime on Indian reservations has historically been required (by the 1885 Major Crimes Act, 18 U.S.C. §§1153, 3242, and court decisions) to be investigated by the federal government, usually the Federal Bureau of Investigation, and prosecuted by United States Attorneys of the United States federal judicial district in which the reservation lies. Tribal courts were limited to sentences of one year or less until July 29, 2010, when the Tribal Law and Order Act was enacted; the act reforms the system in some measure, permitting tribal courts to impose sentences of up to three years, provided that proceedings are recorded and additional rights are extended to defendants.
On January 11, 2010, the Justice Department initiated the Indian Country Law Enforcement Initiative, which recognizes problems with law enforcement on Indian reservations and assigns top priority to solving them. In the department's words: "The Department of Justice recognizes the unique legal relationship that the United States has with federally recognized tribes. As one aspect of this relationship, in much of Indian Country, the Justice Department alone has the authority to seek a conviction that carries an appropriate potential sentence when a serious crime has been committed. Our role as the primary prosecutor of serious crimes makes our responsibility to citizens in Indian Country unique and mandatory. Accordingly, public safety in tribal communities is a top priority for the Department of Justice." Emphasis was placed on improving prosecution of crimes involving domestic violence and sexual assault. Passed in 1953, Public Law 280 (PL 280) gave jurisdiction over criminal offenses involving Indians in Indian Country to certain states and allowed other states to assume jurisdiction. Subsequent legislation allowed states to retrocede jurisdiction, which has occurred in some areas. Some PL 280 reservations have experienced jurisdictional confusion, tribal discontent, and litigation, compounded by the lack of data on crime rates and law enforcement response. As of 2012, a high incidence of rape continued to affect Native American women. A survey of death certificates over a four-year period showed that deaths among Indians due to alcohol are about four times as common as in the general U.S. population; they are often caused by traffic collisions and liver disease, with homicide, suicide, and falls also contributing. Deaths due to alcohol are more common among men and among Northern Plains Indians, while Alaska Natives showed the lowest incidence. Under federal law, alcohol sales are prohibited on Indian reservations unless the tribal councils allow them.
Gang violence has become a major social problem. A December 13, 2009, article in The New York Times about growing gang violence on the Pine Ridge Indian Reservation estimated that there were 39 gangs with 5,000 members on that reservation alone. As opposed to traditional "Most Wanted" lists, Native Americans are often placed on regional Crime Stoppers lists offering rewards for their whereabouts.

Disputes over land sovereignty

When Europeans encountered the New World, colonial governments established a precedent of determining land sovereignty in North America through treaties between nations, a precedent upheld by the United States government. As a result, most Native American land was purchased by the United States government, a portion of which was designated to remain under Native sovereignty. The United States government and Native peoples do not always agree on how land should be governed, which has resulted in a series of disputes over sovereignty. The federal government and Lakota Sioux tribe members have been engaged in a legal dispute over the Black Hills since the signing of the 1868 Fort Laramie Treaty, which created what is known today as the Great Sioux Nation, covering the Black Hills and nearly half of western South Dakota. The treaty was acknowledged and respected until 1874, when General George Custer discovered gold there, sending a wave of settlers into the area and leading President Grant to recognize the land's value. President Grant used military force to remove the Sioux from the land and assisted in the development of the 1876 Congressional appropriations bill for Indian Services, a "starve or sell" measure relinquishing the Sioux's rights to the Black Hills that was signed by only 10% of adult tribal men, far short of the three-quarters required under the Fort Laramie Treaty.
Following this treaty, the Agreement of 1877 was passed by Congress to remove the Sioux from the Black Hills, stating that the land had been purchased from the Sioux despite the insufficient number of signatures, the lack of transaction records, and the tribe's claim that the land was never for sale. The Black Hills are sacred to the Sioux as a place central to their spirituality and identity, and ownership of the land has been contested in the courts by the Sioux Nation since a legal avenue was opened to them in 1920. Beginning in 1923, the Sioux pressed a legal claim that the taking of the Black Hills was illegal under the Fifth Amendment and that no amount of money could make up for the loss of their sacred land. After being revived by Congress, the claim reached the Supreme Court in United States v. Sioux Nation of Indians; the Court ruled that the seizure of the Black Hills was indeed illegal and awarded the Sioux over $100 million. The Sioux have continually rejected the money, and the award has since been accruing interest in trust accounts, amounting to about $1 billion as of 2015. During his campaign, President Barack Obama indicated that the case of the Black Hills would be resolved with innovative solutions and consultation, but doubts remained, recalling an earlier note from White House Counsel Leonard Garment to the Oglala people: "The days of treaty-making with the American Indians ended in 1871; ...only Congress can rescind or change in any way statutes enacted since 1871." The He Sapa Reparations Alliance was established after Obama's inauguration to educate the Sioux people and propose a bill to Congress that would allocate 1.3 million acres of federal land within the Black Hills to the tribe.
To this day, the dispute over the Black Hills is ongoing; the trust is estimated to be worth nearly $1.3 billion, and some sources believe principles of restorative justice may be the best way to address this century-old dispute. While the 1783 Treaty of Paris, which ended the American Revolution, addressed land sovereignty disputes between the British Crown and the colonies, it neglected to settle hostilities between colonists and Indigenous people, specifically those who had fought on the side of the British, as four of the six Haudenosaunee nations did. In October 1784, the newly formed United States government facilitated negotiations with representatives of the Six Nations at Fort Stanwix, New York. The resulting 1784 treaty had the Indians give up their territory within the Ohio River Valley, with the U.S. guaranteeing the Haudenosaunee six million acres, about half of present-day New York, as permanent homelands. Unenthusiastic about the treaty's conditions, the state of New York secured a series of 26 "leases", many of them lasting 999 years, on all Native territories within its boundaries. Led to believe that they had already lost their land to the New York Genesee Company, the Haudenosaunee agreed to land leasing, which New York Governor George Clinton presented as a means by which the Indigenous nations could maintain sovereignty over their land. On August 28, 1788, the Oneida people leased five million acres to the state in exchange for $2,000 in cash, $2,000 in clothing, $1,000 in provisions, and $600 annual rent. The other two tribes followed with similar arrangements. On September 15, 1797, the Holland Land Company gained control over all but ten acres of the Native land leased to the state. These 397 square miles were subsequently parceled out and subleased to whites, allegedly ending Native title to the land. Despite Iroquois protests, federal authorities did virtually nothing to correct the injustice.
Certain of losing all of their lands, in 1831 most of the Oneidas asked that what was left of their holdings be exchanged for 500,000 acres purchased from the Menominees in Wisconsin. President Andrew Jackson, committed to Indian removal west of the Mississippi, agreed. The Treaty of Buffalo Creek, signed on January 15, 1838, directly ceded 102,069 acres of Seneca land to the Ogden company for $202,000, a sum divided evenly between the government (to hold in trust for the Indians) and non-Indian individuals who wanted to buy and improve the plots. All that was left of the Cayuga, Oneida, Onondaga, and Tuscarora holdings was extinguished at a total cost of $400,000 to Ogden. After Indian complaints, a second Treaty of Buffalo Creek was negotiated in 1842 in an attempt to ease tensions. Under this treaty the Haudenosaunee were given the right to reside in New York, and small areas of reservations were restored by the U.S. government. These agreements were largely ineffective in protecting Native American land: by 1889, eighty percent of all Iroquois reservation land in New York was leased by non-Haudenosaunee. The modern-day Navajo and Hopi Indian reservations are located in Northern Arizona, near the Four Corners area. The Hopi reservation covers 2,531.773 square miles (6,557.26 km2) within Arizona and lies surrounded by the much larger Navajo reservation, which spans 27,413 square miles (71,000 km2) and extends slightly into New Mexico and Utah. The Hopi, one of the Pueblo peoples, made many spiritually motivated migrations throughout the Southwest before settling in present-day Northern Arizona. The Navajo people also migrated throughout western North America following spiritual commands before settling near the Grand Canyon area. The two tribes peacefully coexisted and even traded and exchanged ideas with each other.
Their way of life was threatened when the "New People", as the Navajo called white settlers, began conquering Native tribes across the continent and claiming their land in the wake of Andrew Jackson's Indian Removal Act. War ensued between the Navajo people, who call themselves the Diné, and the new Americans. The result was the Long Walk of the early 1860s, in which the entire tribe was forced to walk roughly 400 miles (640 km) from Fort Canby (present-day Window Rock, Arizona) to Bosque Redondo in New Mexico. Like the better-known Cherokee Trail of Tears, the march killed many members of the tribe. Roughly 11,000 tribe members were imprisoned there in what the United States government deemed an experimental Indian reservation, which failed because it became too expensive, there were too many people to feed, and the inhabitants were continuously raided by other Native tribes. Consequently, in 1868, the Navajo were allowed to return to their homeland after signing the Treaty of Bosque Redondo, which officially established the "Navajo Indian Reservation" in Northern Arizona. The term reservation itself creates territorialities, or claims on places. The treaty gave the Navajo the right to the land and semi-autonomous governance of it. The Hopi reservation, by contrast, was created through an executive order by President Arthur in 1882. A few years after the two reservations were established, the Dawes Allotment Act was passed, under which communal tribal land was divided up and allocated to individual households in an attempt to enforce European-American farming styles in which each family owns and works its own plot of land. This was a further act of enclosure by the U.S. government. Each family received 640 acres (260 ha) or less, and the remaining land was deemed "surplus" because it was more than the tribes supposedly needed. This "surplus" land was then made available for purchase by American citizens.
The land designated to the Navajo and Hopi reservations was considered barren and unproductive by white settlers until 1921, when prospectors scoured it for oil. The mining companies pressured the U.S. government to set up Native American councils on the reservations so that the councils could agree to contracts, specifically leases, in the name of the tribe. During World War II, uranium was mined on the Diné and Hopi reservations. The dangers of radiation exposure were not adequately explained to the Native people, who made up almost all of the mine workforce and lived in the mines' immediate vicinity. Some residents who lived near the uranium projects used quarried rock from the mines to build their houses; the material was radioactive and had detrimental health effects on residents, including increased rates of kidney failure and cancer. During extraction, some Native children played in large pools of water, created by mining activity, that were heavily contaminated with uranium. The companies also failed to properly dispose of the radioactive waste, which continues to pollute the environment, including Native water sources. Many years later, the men who had worked the mines died of lung cancer, and their families received no form of financial compensation. The 1979 Church Rock uranium mill spill was the largest release of radioactive waste in U.S. history, contaminating the Puerco River with 1,000 tons of solid radioactive waste and 93 million gallons of acidic, radioactive tailings solution that flowed downstream into the Navajo Nation. The Navajo used the water from this river for irrigation and for their livestock but were not immediately informed about the contamination and its danger. After the war ended, the American population boomed and energy demands soared. Utility companies, needing a new source of power, began constructing coal-fired power plants.
They placed these power plants in the Four Corners region. In the 1960s, John Boyden, an attorney working simultaneously for the Hopi tribe and for Peabody Coal, the nation's largest coal producer, managed to gain rights to Hopi land, including Black Mesa, a location sacred to both tribes that lay partially within their Joint Use Area. Some consider this an example of environmental racism and injustice under the principles established by the Participants of the First National People of Color Environmental Leadership Summit: the Navajo and Hopi, as communities of color marked by low income and political alienation, were disproportionately affected by the proximity and resulting pollution of these power plants, which disregarded their right to clean air; their land was degraded; and the related public policies were not based on mutual respect for all people. The mining companies wanted more land, but the joint ownership of the land made negotiations difficult. At the same time, the Hopi and Navajo tribes were quarreling over land rights as Navajo livestock continually grazed on Hopi land. Boyden took advantage of this situation, presenting it to the House Subcommittee on Indian Affairs and claiming that if the government did not step in, a bloody war would ensue between the tribes. Congress passed the Navajo-Hopi Land Settlement Act of 1974, which forced any Hopi or Navajo living on the other tribe's land to relocate. This affected 6,000 Navajo people and ultimately most benefited the coal companies, which could now more easily access the disputed land. Instead of using military violence against those who refused to move, the government passed what became known as the Bennett Freeze to encourage people to leave. The Bennett Freeze banned any type of development, including paving roadways and even roof repair, on 1.5 million acres (6,100 km2) of Navajo land.
This was meant as a temporary incentive to push the tribes to negotiate, but it lasted over forty years, until President Obama lifted the moratorium in 2009. The legacy of the Bennett Freeze still looms over the region, as seen in the nearly third-world conditions on the reservation: seventy-five percent of residents do not have access to electricity, and housing conditions are poor. Much of what is now Oklahoma was considered Indian Territory from the 1830s. The tribes in the area attempted to join the union as the Native-led State of Sequoyah in 1905 as a means of retaining control of their lands, but this was unsuccessful, and the lands were merged into Oklahoma with the Enabling Act of 1906. That act had long been understood to have disestablished the reservations so that statehood could proceed. In July 2020, however, the Supreme Court ruled in McGirt v. Oklahoma that the area, about half of the modern state, never lost its status as a Native reservation. This includes the city of Tulsa. The area includes lands of the Chickasaw, Choctaw, Cherokee, Muscogee, and Seminole. The ruling is based on an 1832 treaty, which the court ruled was still in force, adding that "Because Congress has not said otherwise, we hold the government to its word." In 2021, the Oklahoma Court of Criminal Appeals upheld a lower court ruling that the Quapaw Nation's reservation in Ottawa County in northeastern Oklahoma was never disestablished. In 2024, the same court ruled that the Wyandotte Nation's reservation was also never disestablished. Court rulings have likewise confirmed the existence of the reservations of the Miami Tribe of Oklahoma, Peoria Tribe of Indians of Oklahoma, and Ottawa Tribe of Oklahoma.

Life and culture

Many Native Americans who live on reservations interact with the federal government through two agencies: the Bureau of Indian Affairs and the Indian Health Service.
The standard of living on some reservations is comparable to that in the developing world, with problems of infant mortality, low life expectancy, poor nutrition, poverty, and alcohol and drug abuse. According to data compiled by the 2000 census, the two poorest counties in the United States are Buffalo County, South Dakota, home of the Crow Creek Indian Reservation, and Oglala Lakota County, South Dakota, home of the Pine Ridge Indian Reservation. This disparity in living standards can partly be explained by centuries of settler colonialism, which has systematically harmed Indigenous peoples' relations with the land and attempted to erase their cultural ways of life. Potawatomi scholar Kyle Powys Whyte has stated, "While Indigenous peoples, as any society, have long histories of adapting to change, colonialism caused changes at such a rapid pace that many Indigenous peoples became vulnerable to harms, from health problems related to new diets to erosion of their cultures to the destruction of Indigenous diplomacy, to which they were not as susceptible prior to colonization." This has resulted in an ever-widening disparity between Native peoples and the rest of the United States. It is commonly believed that environmentalism and a connectedness to nature are ingrained in Native American culture; however, this is a generalization. In recent years, cultural historians have set out to reconstruct and complicate this notion, which they describe as a culturally inaccurate romanticism. Others recognize the differences between the attitudes and perspectives that emerge from comparing Western European philosophy with the Traditional Ecological Knowledge (TEK) of Indigenous peoples, especially in natural resource conflicts and management strategies involving multiple parties.
Environmental issues

The lands on which reservations are located are disproportionately low in natural resources and in soil of a quality conducive to economic prosperity. Starting in the mid-20th century, reservations increasingly lay in areas contaminated with toxic runoff from current or historical industrial activities conducted by outside entities, including private corporations and the federal government. According to anthropologists Merrill Singer and Derrick Hodge: "The toxic and poor land quality of Native American lands is neither a historical accident nor the result of any cultural deficiency on their part, but rather is the result of aggressive westward economic expansion. This process was calculated and unconcerned with indigenous wellbeing. [...] Thus, federal policy, including the Indian Removal Act of 1830, was designed to displace Native Americans from coveted land and to relocate them to areas seen as relatively 'valueless by nineteenth century standards'." Communities living on Native reservations are also disproportionately affected by environmental hazards. Because they are deemed "undesirable", lands on and near reservations are often used by the U.S. government and private industries as sites for environmentally hazardous activities, including uranium mining, nuclear waste disposal, and military testing. As a result, many reservation communities have been subjected to adverse health effects. According to scholar Traci Lynn Voyles, the Navajo Nation in particular has been affected for decades by uranium mining and nuclear waste dumping: "Radiation-related diseases are now endemic to many parts of the Navajo Nation, claiming the health and lives of former miners to be sure but also those of Navajos who would never see the inside of a mine.
Diné children have a rate of testicular and ovarian cancer fifteen times the national average, and a fatal neurological disease called Navajo neuropathy has been closely linked to ingesting uranium-contaminated water during pregnancy". Other reservation communities have been subjected to similar treatment. According to scholar Winona LaDuke, the Paiute-Shoshone community was deliberately exposed to radiation throughout the latter half of the 20th century: "In 1951 the Atomic Energy Commission set up the Nevada Test Site within Western Shoshone territory as a proving grounds for nuclear weapons. Between 1951 and 1992, the U.S. and UK exploded 1,054 nuclear devices above and below ground [...] According to Sanchez, the Atomic Energy Commission would deliberately wait for clouds to blow north before conducting tests, so that the fallout would avoid any heavily populated areas such as Las Vegas and Los Angeles. This meant that the Shoshones would get a larger dose." Many Indigenous communities have also seen sacred lands degraded in favor of resource extraction. Around 79 percent of the lithium deposits on U.S. soil are within 35 miles of Indian reservations. Thacker Pass is home both to one of the largest lithium deposits in the world and to a sacred burial site of multiple tribes, including the Pit River and Paiute. In 2021, the mining company Lithium Nevada was granted permission to mine the area by the Bureau of Land Management. Tribal members argue that these permits were unlawfully issued and that "the BLM notified only three of Nevada's 27 tribes about the mine". Historically, Indigenous groups have had little say in which land they are designated to occupy, or in what happens to that land.
This can be explained by the following excerpt from an academic article on the impacts of climate change in the Arctic: "While a government-to-government relationship is now officially required, these cases (which continue to define the indigenous/federal relationship in the U.S.) instituted a federal 'trust responsibility' for indigenous people in the U.S., codifying a legal relationship of paternalism that limits the autonomy of tribal governments. The United States government is thus under a legal obligation to protect the lands, resources and traditionally used areas of indigenous peoples, and government agencies are required to consult with tribal governments and Alaska Native Corporations in natural resource decision-making. While some view this form of representation as the best and only practical means of influencing Northern policy, the actual involvement of tribal governments has been limited, and seen as perfunctory, and may be precluded by the procedural and structural mandates of federal law and legal precedent." This can be seen in the number of reservations placed near massive construction projects that lead to pollution, such as landfills or the Dakota Access Pipeline. In addition, the lands that Indigenous people are designated to occupy by the federal government typically present difficulties of their own. As scholars Gregory Hooks and Chad Smith explain in their journal article connecting the focus on production to environmental issues, "Federally owned and Native American lands tended to be in close proximity, and they had a great deal in common: they were concentrated in the states west of the Mississippi, and they tended to be lands that were too dry, remote, or barren to attract the attention of settlers and corporations." Reservations are often designated or located close to "Superfund sites": areas designated by the U.S. Environmental Protection Agency (EPA) as polluted, hazardous to live in, and requiring cleanup action.
In 2014, 532 of the 1,322 Superfund sites in the United States (about 40%) were in Indian Country. Some of these include the Jackpile-Paguate Uranium Mine on the Pueblo of Laguna, the St. Regis Paper Company on the Leech Lake Band of Ojibwe's reservation, and the Sulphur Bank Mercury Mine on the Elem Band of Pomo Indians' reservation.
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEDundes1997-76] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. 
They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved.[citation needed] Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh? You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple dating back to 1200 BC.
It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line), has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language; particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase, "tertia deducta", can be translated as "with one-third off (in price)", or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure and a number of different authors are attributed to it, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". 
During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier. With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. Only one of many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour."
These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke.
"Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world. "An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. 
A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. 
In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. This study adds credence to the common experience when exposed to an off-colour joke; a laugh is followed in the next breath by a disclaimer: "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content in the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline of the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary.
In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, going from general to topical into explicitly sexual humour, signalled openness on the part of the waitress for a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends. Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking.
These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it in another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a replied email with a :-) or LOL, or a forward on to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forward of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality, they also exist in the connectivity in cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". 
This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving cycle was circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real-time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes).
A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Joke cycles circulated in the recent past include: As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. 
In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. 
…" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify the text by more than one element at a time, while at the same time making it theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating your own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side-by-side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it.
A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. It has proven difficult to organise all different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS), and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels.
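The idea of a concatenated KR label, and of comparing jokes by the similarity of their labels, can be sketched in a few lines of code. This is a minimal illustration only: the class name, the field values, and the overlap-counting similarity measure are invented for this example and are not drawn from the GTVH literature.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class GTVHLabel:
    """One joke's classification as a tuple of the six Knowledge Resources."""
    script_opposition: str            # SO, e.g. "dumb/smart"
    logical_mechanism: Optional[str]  # LM; the theory allows this to be empty
    situation: str                    # SI, e.g. "changing a lightbulb"
    target: Optional[str]             # TA; may also be empty
    narrative_strategy: str           # NS, e.g. "riddle"
    language: str                     # LA, the actual wording chosen

def similarity(a: GTVHLabel, b: GTVHLabel) -> int:
    """Crude similarity: the number of KRs on which two jokes agree."""
    da, db = asdict(a), asdict(b)
    return sum(1 for k in da if da[k] == db[k])

# Two lightbulb jokes that share everything except target and wording.
j1 = GTVHLabel("dumb/smart", "figure-ground reversal", "changing a lightbulb",
               "group A", "riddle", "How many ... does it take ...?")
j2 = GTVHLabel("dumb/smart", "figure-ground reversal", "changing a lightbulb",
               "group B", "riddle", "How many ... are needed ...?")

print(similarity(j1, j2))  # the two labels agree on 4 of the 6 KRs
```

The hierarchy restrictions described above (a lightbulb situation forcing the riddle narrative strategy, for instance) could be enforced with validation on construction, but are omitted here for brevity.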
"The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". 
Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory, developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter is always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and using identical jokes across languages would not be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues?
Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much given to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which they could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline.
The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested within the last decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While a variant of the more general incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later, the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke.
Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. 
The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)" to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating an interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description of laughter in terms of respiration, vocalization, facial action and gesture and posture" (Laughter) in The Expression of the Emotions in Man and Animals. 
In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. 
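The template-driven punning programs described above can be caricatured in a few lines. Everything here (the template, the table, the riddles) is invented for illustration, which is exactly the point: such a program has a finite, hand-built set of options and no understanding of why any of them is funny:

```python
import random

# A caricature of early template-based pun programs: one fixed
# template plus a finite, pre-defined table of fillers.
PUN_TABLE = [
    ("fish with no eyes", "A fsh."),
    ("cow with no legs", "Ground beef."),
    ("boomerang that won't come back", "A stick."),
]

def make_riddle(setup, punchline):
    # The "generator" can only ever instantiate this one template.
    return f"What do you call a {setup}? {punchline}"

def random_riddle():
    return make_riddle(*random.choice(PUN_TABLE))
```

No semantic scripts are involved anywhere: the program would "generate" nonsense just as happily if the table contained unfunny pairs, which is why such toy systems display no intelligence.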
As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-Pelkeyp42-25] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. 
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
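The division of labour between the two principal name spaces is visible in a single resolver call, which asks the DNS to map a human-readable name into the IP address space. This sketch uses Python's standard socket module and `localhost` so that it works without network access:

```python
import socket

def resolve_ipv4(hostname: str) -> list[str]:
    """Map a DNS name into the IPv4 address space via the system resolver."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4 the sockaddr is an (address, port) pair.
    return sorted({info[4][0] for info in infos})
```

The same call against a public name such as `example.com` would consult the globally coordinated DNS hierarchy whose root ICANN administers; `localhost` is resolved locally.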
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to show growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.
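The "doubling every 18 months" rate quoted above compounds quickly; a one-line function makes the arithmetic concrete (a sketch of the growth law, not a model of real traffic):

```python
def growth_factor(months: float, doubling_period_months: float = 18.0) -> float:
    """Multiplicative growth after `months` under a fixed doubling period."""
    return 2.0 ** (months / doubling_period_months)

# One doubling period doubles the traffic; a decade (120 months) is
# 2 ** (120 / 18), roughly a hundredfold -- two orders of magnitude.
per_decade = growth_factor(120)
```

This is why a 56 kbit/s backbone in 1986 and multi-gigabit links two decades later are consistent with a single exponential trend.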
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. 
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
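Mojibake, mentioned above, arises when bytes written in one character encoding are decoded with another. The classic case, UTF-8 bytes read as Latin-1, can be reproduced in a couple of lines of Python:

```python
original = "日本語"  # three Japanese characters, nine bytes in UTF-8

# Decoding those UTF-8 bytes as Latin-1 yields mojibake: nine
# unrelated Western-European characters instead of three Japanese ones.
garbled = original.encode("utf-8").decode("latin-1")

# Because Latin-1 maps every byte value to exactly one character, the
# damage is reversible as long as no further normalisation occurred.
restored = garbled.encode("latin-1").decode("utf-8")
```

Unicode-aware software avoids the problem by carrying the encoding alongside the bytes rather than guessing it.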
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications.
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, communicate cheaply, and share ideas.
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The Internet also enables cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness.
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams by using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, worldwide e-commerce, combining global business-to-business and business-to-consumer transactions, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written about the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people.
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The 2004 presidential campaign of Howard Dean in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region.
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed]

Applications and services

The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide.
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet.
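The request/response exchange that HTTP performs can be sketched in plain Python without touching the network. This is a minimal illustration of the HTTP/1.1 message format only; the host name, path, and response body below are placeholders, not from the source:

```python
# A minimal HTTP/1.1 GET request: a request line, headers, and a blank
# line. "example.com" and "/index.html" are illustrative placeholders.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A canned server reply: status line, headers, blank line, then the body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Parse the reply the way a browser would: split headers from body at the
# first blank line, then split the status line from the header fields.
head, body = response.split("\r\n\r\n", 1)
status_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)              # HTTP/1.1 200 OK
print(headers["Content-Type"])  # text/html
```

In practice a client would send `request` over a TCP connection (e.g. with the standard library's `http.client`) rather than parsing a canned string, but the wire format is exactly this.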
The origin and authenticity of the file received may be checked by a digital signature.

Governance

The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters.
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed]

Infrastructure

The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems.
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks.
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet.

Internet Protocol Suite

The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g.
"en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (2^32) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier.
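The address sizes and the prefix/host split described above can be demonstrated with Python's standard-library ipaddress module. The specific addresses used here are drawn from the reserved documentation ranges (198.51.100.0/24 and 2001:db8::/32) purely for illustration:

```python
import ipaddress

# IPv4 addresses are 32-bit numbers; IPv6 addresses are 128-bit.
v4 = ipaddress.ip_address("198.51.100.7")  # IPv4 documentation range
v6 = ipaddress.ip_address("2001:db8::1")   # IPv6 documentation range

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128
print(2 ** 32)                       # 4294967296 -- the ~4.3 billion IPv4 limit

# In a /24 network the first 24 bits form the routing prefix and the
# remaining 8 bits form the host identifier, giving 2**8 = 256 addresses.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256

# A bitwise AND of any address in the network with the netmask
# recovers the routing prefix:
prefix = ipaddress.ip_address(int(v4) & int(net.netmask))
print(prefix, v4 in net)  # 198.51.100.0 True
```

The same operations work unchanged for IPv6 networks, where `ipaddress.ip_network("2001:db8::/32").num_addresses` is 2^96.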
The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet.

Security

Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S.
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed]

Performance

As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.

[Figure: Global Internet traffic volume in petabytes per month, 1990–2015]

The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
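The scale of the disagreement in those per-gigabyte estimates is easy to verify with a line of arithmetic; the 3 GB file size used below is an assumed illustration, not a figure from the source:

```python
# The spread of published estimates quoted above: intensities from
# 0.0064 kWh/GB to 136 kWh/GB differ by roughly a factor of 20,000.
low, high = 0.0064, 136.0   # kWh per gigabyte transferred
factor = high / low
print(round(factor))        # 21250, i.e. on the order of 20,000

# Illustration only (3 GB is an assumed file size): the energy
# attributed to a single transfer under each extreme estimate.
gb = 3.0
print(round(gb * low, 4), "kWh vs", gb * high, "kWh")
```

The four-orders-of-magnitude gap between 0.02 kWh and 408 kWh for the same transfer is exactly why the 2014 paper stressed system boundaries and year of reference.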
======================================== |
[SOURCE: https://techcrunch.com/2025/02/28/tech-layoffs-2024-list/]
A comprehensive list of 2025 tech layoffs

The tech layoff wave is still kicking in 2025. Last year saw more than 150,000 job cuts across 549 companies, according to independent layoffs tracker Layoffs.fyi. So far this year, more than 22,000 workers have been the victims of reductions across the tech industry, with a staggering 16,084 cuts taking place in February alone. We're tracking layoffs in the tech industry in 2025 so you can see the trajectory of the cutbacks and understand the impact on innovation across all types of companies. As businesses continue to embrace AI and automation, this tracker serves as a reminder of the human impact of layoffs — and what could be at stake with increased innovation. Below you'll find a comprehensive list of all the known tech layoffs that have occurred in 2025, which will be updated regularly. If you have a tip on a layoff, contact us here. If you prefer to remain anonymous, you can contact us here.

December

Is winding down its autonomous mobile robot (AMR) business, which was built after it acquired Fetch Robotics in 2021, per a report. The Illinois-based company is now considering whether to sell the AMR unit or shut it down, with most employees expected to leave by the end of 2025. Is cutting 84 jobs in Seattle and Bellevue in its latest round of layoffs, according to several media reports.
The reductions affect roles across engineering, recruiting, software development, and product management. The layoffs are scheduled for February 2 through February 23, 2026, and affected employees will receive at least 90 days of pay, benefits, and job transition assistance. Is laying off 8% of its workforce, about 24 employees, as part of a restructuring to reallocate resources toward new growth areas, not broad cost-cutting, per a report from CTech. The Israeli sales intelligence startup will continue hiring for key roles while focusing its product on future market needs. Has reportedly cut 7.5% of its workforce, reducing its headcount to about 1,000, as it reshapes teams and skills rather than responding to financial pressure. The Santa Clara-based AI chip startup is shifting its focus from enterprise customers to individual developers, while continuing work on its chiplet-based roadmap and broader software and model support. Will let go of about 30 employees in Israel and a similar number of staff overseas, bringing the total reduction to roughly 6% of its global workforce. Laid off 24 employees as part of a restructuring to refocus on tools for professional photographers. In an internal memo seen by TechCrunch, CEO Eric Wittman said that consumer demand fell short and recent expansion efforts hadn’t delivered as hoped. Is reportedly cutting 200 employees, about 4% of its global workforce. With over 3,000 of its 4,300 employees based in Israel, most of the cuts will affect its local teams. Shut down on December 1, according to an audio recording obtained by Axios Pro. The hospital-at-home startup had raised more than $50 million. November The company continued toward its stated goal of cutting a significant share of its workforce this year, with 59 Bay Area jobs eliminated effective November 30, in an Employment Development Department filing spotted by KRON4.
Is reportedly set to cut 4,000 to 6,000 jobs worldwide by 2028 as it looks to streamline operations and leverage AI to speed up product development and boost efficiency. Is cutting several sales positions handling business, school, and government accounts as it moves to streamline how it sells devices and services to those customers, Bloomberg reports. Told employees it may lay off more than 100 workers or even shut down, according to an internal memo obtained by TechCrunch. This comes after weeks of staff cuts across the autonomous electric tractor startup’s California offices and its teams in India and Singapore. Announced plans to lay off about 20% of its workforce, 700 to 800 employees, next month, marking its fifth round of cuts since 2022, according to Calcalist. The Nasdaq-listed gaming company, valued at $1.5 billion, employs about 3,500 people. Has laid off about 200 employees, roughly half its workforce, per Fintech Business Weekly. The revenue-based small business lender, once valued at $2 billion, said the cuts are part of its push toward profitability and greater operational efficiency. Plans to cut roughly 10% of its workforce and close several sites as part of a restructuring tied to its recent acquisition of Ansys, The Wall Street Journal reported. The layoffs, which are expected to affect about 2,000 employees, are scheduled to take place during fiscal 2026, which began November 1. Has laid off between 60 and 80 employees, citing artificial intelligence as one of the factors behind the decision, TechCrunch reported. The cybersecurity firm, which builds an AI-powered threat detection and response platform, employs roughly 250 people. Is reportedly cutting roughly 10% of its staff, notifying employees in early November that about 100 of its 900 workers will be laid off. The New York–based cybersecurity firm says the move aims to streamline operations.
Is set to permanently close its local operations, laying off all 141 employees in two waves, according to a filing with the Florida Department of Commerce. The Florida-headquartered fintech company’s first 100 employees were let go on October 31, with the remaining 41 slated for termination by December 31. Is removing 52 positions at its San Jose campus, according to reporting from the San Francisco Chronicle. The layoffs, which began last month and will continue through November, affect employees across cloud development, engineering, and product management. October After Reuters reported that the company was planning to eliminate up to 30,000 corporate jobs, amounting to roughly 10% of the 350,000 employees in its corporate departments, Amazon shared that it would pursue an “overall reduction in our corporate workforce of approximately 14,000 roles.” Since that news broke, Amazon has laid off 660 employees across multiple New York City offices, with more to come through the year. Is cutting 600 jobs, about 4% of its workforce, amid an EV market pullback, marking its third layoff this year. Details of the latest layoffs remain undisclosed, while earlier cuts in June and September affected 100 to 150 employees in its commercial and manufacturing teams. Has laid off approximately 600 employees across its AI infrastructure units, including the Fundamental AI Research (FAIR) team and other product-related roles. However, top-tier AI hires in TBD Labs, managed by new chief AI officer Alexandr Wang, will not be affected. Plans to cut about 4% of its workforce, or roughly 1,400 jobs, to streamline operations amid tighter U.S. semiconductor export controls. Laid off around 100 employees in October, about 15% of its 650-person U.S. workforce. The layoffs affected various roles across its recruiting business vertical. The San Francisco-based startup is an online platform connecting college students and recent graduates with employers for early-career jobs.
Has reportedly laid off over 120 employees amid a leadership transition following CEO Mark Mader’s retirement. The enterprise software company, which grew to more than 3,300 employees, was acquired for $8.4 billion by Blackstone and Vista Equity Partners earlier this year, taking it private. Has cut over 100 design roles in its cloud division, hitting U.S.-based teams especially hard, as the company shifts focus toward AI investments, per a CNBC report. Many affected employees have until early December to find a new role within Google, following additional layoffs across its Silicon Valley offices, including at least 50 permanent cuts in Sunnyvale. Is reportedly laying off over 500 employees due to AI and automation improving back-office efficiencies. The Oklahoma City-based HR and payroll software company will provide affected workers with severance packages, outplacement services, and access to internal job opportunities. September Will eliminate around 450 jobs as part of a cost and operations review, according to Reuters. The layoffs will span multiple functions and countries, including customer service and sales. Europe’s largest food delivery company said it is increasingly using automation and AI, shifting many manual service tasks to automated systems. Plans to cut around 250 jobs, approximately 30% of its workforce, as part of a push to become a leaner, faster, and AI-focused company, according to The Wall Street Journal. The Tel Aviv-headquartered freelance services marketplace said the restructuring will reduce management layers and position it to pursue growth with an AI-native approach. Is closing its Tel Aviv development center, cutting about 80 jobs. Led by Yosi Taguri, the office specialized in software, data, and AI research, including algorithm development. The California-based recruitment firm, founded in 2010, is trimming costs amid a challenging labor market. 
Has laid off at least 100 employees, including junior developers, just months after cutting nearly 200 jobs. The San Francisco-based conversational AI company, which is preparing for an IPO within two years, raised $60 million in equity and debt in July. Laid off about a third of its data annotation team, cutting roughly 500 jobs, according to Business Insider. The move comes as the company shifts focus from generalist AI tutors to specialist roles, after testing workers to assess their strengths. Employees were told they’ll be paid through the end of their contracts — or November 30 at the latest — but their system access was cut immediately, Business Insider reports. Has reportedly laid off about 200 workers, or 1.5% of its staff, as the company braces for the end of federal EV tax credits under President Trump’s policy changes. The $7,500 incentive for new electric cars expires this month, adding to pressure from cooling demand. Despite the cuts, Rivian says it’s moving ahead with plans for a lower-cost model. Is cutting another 101 jobs in Seattle and 254 in San Francisco, just weeks after a wave of layoffs in August. The company, which had about 3,900 local employees before the cuts, hasn’t explained the move and declined to comment. Is trimming another 262 jobs at its San Francisco headquarters, according to a state filing, with layoffs set to take effect November 3. The move comes just weeks after CEO Marc Benioff touted AI’s potential to cut customer support roles and follows a smaller round of cuts in Seattle and Bellevue earlier this month. August Will eliminate 221 positions across its Milpitas and San Francisco offices, including 157 in Santa Clara County and 64 in San Francisco, effective October 13, according to filings with California’s Employment Development Department reported by the San Francisco Chronicle. The cuts are part of the company’s broader workforce-reduction strategy. 
Laid off about 100 employees last month, around 9% of its workforce, after falling short of ambitious growth targets. The cuts affected staff across all departments. The company provides back-office software for restaurant chains. Is set to cut 101 jobs at its Santa Clara location, with notices issued on August 13 and terminations effective October 13. The company, which recently disclosed nearly 200 layoffs at its Pleasanton and Redwood City offices, is also planning to lay off 161 employees in Seattle, according to filings with the Washington state Employment Security Department. Is cutting 106 positions at its Seattle and Liberty Lake, Washington, offices, according to a state Employment Security Department filing. The layoffs, which affected senior engineers and managers, are part of a broader global workforce reduction, although the security and application delivery company has not disclosed the total number of employees affected. Will cut 6% of its workforce in its sixth layoff in just over a year. Peloton CEO Peter Stern said the cuts are needed to improve long-term business health. Is cutting 10% of its workforce, or about 70 employees, as part of a cost-saving effort to reduce operating expenses by $8.5 million, marking its third round of layoffs since 2022. The corporate video software company plans to maintain and gradually grow its sales and marketing budgets, driven by a robust pipeline and growing adoption of its AI-powered offerings. Is laying off about 200 employees, roughly 34% of its global workforce, as it shuts down its email and SMS marketing operations. The Israeli-founded unicorn is partnering with Attentive and Omnisend to continue supporting marketing services while investing in AI-powered tools like automated review summaries, smart sorting, and a new Loyalty Tiers system. Laid off 30 employees and is now offering buyouts to the remaining 200. 
The AI coding startup recently acquired by Cognition has had a rocky stretch, including a near-acquisition by OpenAI and a reverse-acqui-hire by Google that saw key talent depart before Cognition stepped in. Despite initial promises to value Windsurf’s team, the deal now looks more focused on the startup’s intellectual property than its people. Is cutting 100 jobs, and its CEO, Jen Sargent, is departing. Amazon is reorganizing its audio operations, moving Wondery’s audio-only podcasts under Audible and placing video-focused shows into a new Creator Services division. Amazon acquired Wondery in 2020. July Has cut 150 roles in customer service and support, following enhancements to its platform and tools that have significantly reduced support needs. The decision came via a prerecorded message from CEO Mike Cannon-Brookes, just hours before co-founder Scott Farquhar urged Australia to embrace an “AI revolution” and move beyond “jobs of the past” in an Australian Press Club address. The Australian software firm was founded in 2002. Is cutting about 7% of its workforce, or 47 employees, as part of a push toward profitability, Bloomberg reports. The decision follows the recent acquisition of a startup with around 30 staff, who will stay on with the company. Despite the cuts, the blockchain software company that operates the popular digital wallet MetaMask says it will continue hiring for select roles. Is shutting down operations, per a report by Business Insider. The social collaging platform aimed at creators was founded in 2019 and raised $9 million in funding. Its closure highlights the persistent challenges social media startups face in building user bases and achieving long-term growth. Is laying off around 200 employees — roughly 14% of its workforce — and severing ties with 500 global contractors. The cuts come just weeks after Meta brought in the data-labeling startup’s CEO in a $14.3 billion deal. Plans to cut more than 100 U.S.
full-time jobs, about 3% of its workforce, including positions at its Morrisville, North Carolina, campus. As of February 2024, the PC maker employed around 5,100 workers in the U.S. Is reportedly planning to lay off nearly 2,400 workers in Oregon, which is almost five times more than what was announced earlier. Previously, Intel announced that it will lay off more than 500 employees in Oregon, per Bloomberg. Plan to eliminate approximately 1,300 jobs combined as part of a larger restructuring effort to combine their operations and focus on AI. The layoff will mostly affect employees in the U.S., particularly in the R&D, HR, and sustainability teams, according to an internal memo by Hisayuki “Deko” Idekoba, the CEO of Recruit Holdings, which is the Japanese parent company of Indeed and Glassdoor. Has laid off 29 employees as part of its reorganization, per a report by Blockworks. The Seattle-based research and engineering startup recently launched EigenCloud, a platform that provides blockchain-level trust guarantees for any Web 2.0 or web3 application. The reduction will affect 25% of the company’s workforce. Eigen Labs said it had raised $70 million in tokens from a16z Crypto in June. Will cut 9,000 employees, which is less than 4% of its global workforce across teams, role types, and geographies. The reduction follows a series of layoffs earlier this year: It cut less than 1% of the headcount in January, more than 6,000 in May, and at least 300 in June. Is laying off 65 employees in Bellevue, Washington, according to media reports. The parent company of TikTok arrived in Seattle in 2021 and has been expanding its presence there by growing its TikTok Shop online shopping division. June Announced on June 30 that the company is cutting 300 jobs, or 10% of its workforce, as part of organizational restructuring within its sales and support divisions amid the AI shift.
The Amsterdam-based location tech company provides navigation and mapping products. Has reduced its headcount by approximately 140 employees, accounting for roughly 1% of its total workforce. The recent layoffs mostly affected Rivian’s manufacturing team. Announced in an SEC filing that it will cut approximately 240 jobs, or 30% of its workforce, to enhance operational efficiency and allocate the resulting savings to the development of new products and technologies, according to a CNBC report. The layoff will help the online dating app save $40 million annually, per the report. Has reportedly laid off 85 employees, which accounts for approximately 40% of its workforce. The Vancouver-based startup sells software products that use artificial intelligence for business intelligence. It helps sales professionals at tech companies gather information on competitors to improve their sales. Has downsized its smart TV division, cutting 25% of the 300-member team, to adjust its strategy, per reports. Funding for the smart TV division, including Google TV and Android TV, has been cut by 10%, but investment in AI projects has been increased. Says that it plans to lay off 15% to 20% of workers in its Intel Foundry division starting in July. Intel Foundry designs, manufactures, and packages semiconductors for external clients. Intel’s total workforce was 108,900 people as of December 2024, according to the company’s annual regulatory filing. It also confirmed to TechCrunch that it plans to wind down its auto business. Announced that it is letting go of around 90 employees, with 40 in Israel and 50 in Poland. The most recent round of job cuts comes after the Israel-based gaming company laid off 50 employees a few weeks ago. Has let go of around 25 employees from the 58-person team, the company confirmed to TechCrunch. Evernote’s founder Phil Libin launched the video startup in 2020, offering Airtime Creator and Airtime Camera.
Is laying off more employees, just a few weeks after announcing a job cut of over 6,500 in May, which was around 3% of its global workforce. The most recent layoffs affected software engineers, product managers, technical program managers, marketers, and legal counsels. May Plans to downsize its workforce by letting go of 68 employees, approximately 4% of its total staff, per Reuters. The San Francisco telehealth platform said that its layoffs were unrelated to a U.S. ban on producing large quantities of the weight-loss drug Wegovy. The startup said it intends to keep on recruiting employees who fit in with its long-term expansion plans. Is reportedly laying off around 100 employees from its devices and services division, which encompasses various businesses like the Alexa voice assistant, Echo smart speakers, Ring video doorbells, and Zoox robotaxis. The company has reduced its workforce by approximately 27,000 since the start of 2022 to cut costs. Will cut over 6,500 jobs, affecting 3% of its worldwide workforce. As of June, the Seattle-headquartered company had a total of 228,000 employees globally. It would be one of the company’s biggest layoffs since it cut 10,000 employees in 2023. Reportedly plans to let go of 248 employees, or about 22% of its workforce, to reduce expenses and improve efficiency, it said. The San Francisco-based edtech startup, which offers textbook rentals and tutoring services, has seen a drop in web traffic for months as students opt for AI tools instead of traditional edtech platforms. Is reducing its workforce by 13% as part of a reorganization that aims to reduce costs, shore up margins, and streamline its organizational structure. Is laying off 5% of its global workforce, or around 500 people. 
The company said the layoffs were part of “a strategic plan (the ‘Plan’) to evolve its operations to yield greater efficiencies as the Company continues to scale its business with focus and discipline to meet its goal of $10 billion in ending [Annual Recurring Revenue]” in its 8-K filing. Has cut roughly 25% of its current workforce. The Vancouver-based company, which is developing a technology to generate fusion energy, has raised $440 million from investors, including Jeff Bezos, Temasek, and BDC Capital. Reduced its headcount by 20 employees, accounting for 10% of its total workforce. In April 2023, the Israeli cybersecurity startup had previously laid off a similar number of employees during a round of layoffs. Has shut down its operations months after announcing major expansion plans, per Sifted. The British climate startup has let go of approximately 200 employees, according to a LinkedIn post by James Reynolds, the head of talent. April Is reportedly eliminating 700 jobs, affecting 6% of its total workforce, as it reorganizes for its operational efficiency. The company, based in San Francisco, provides data storage, cloud services, and CloudOps solutions for businesses. Is reportedly letting go of approximately 300 to 400 employees, including around 100 at Respawn Entertainment, to focus on its “long-term strategic priorities,” according to Bloomberg. Is laying off around 3% of its employees as part of its restructuring. The job cuts will mainly affect midlevel positions in the product and technology teams. The latest round of layoffs comes after the company let go of hundreds of employees from its marketing team globally in early March. Has reduced its workforce by about 200 employees in its product and technology divisions as part of a restructuring measure. The India-based e-commerce platform for pre-owned vehicles provides a range of services like buying and selling pre-owned cars, financing, insurance, driver-on-demand, and more. 
In 2023, the SoftBank-backed startup raised $450 million at a valuation of $3.3 billion. Is letting go of over 100 employees in its Reality Labs division, which manages virtual reality and wearable technology, according to The Verge. The job cuts affect employees developing VR experiences for Meta’s Quest headsets and staff working on hardware operations to streamline similar work between the two teams. Announced its plan to lay off more than 21,000 employees, or roughly 20% of its workforce, in April. The move comes ahead of Intel’s Q1 earnings call helmed by recently appointed CEO Lip-Bu Tan, who took over from longtime chief Pat Gelsinger last year. Is laying off 200 people at its Factory Zero plant in Detroit and Hamtramck, Michigan, which produces GM’s electric vehicles. The cuts come amid the EV slowdown and are not caused by tariffs, according to a report. Has reportedly let go of around 100 employees since the start of 2025. Earlier this week, about 50 employees from the tech and product teams were let go in the latest round of job cuts. The India-based insurtech startup has raised a total of $125 million to date. Will reduce its workforce by 150 positions following its decision not to proceed with its IPO, per Bloomberg. The San Francisco-based car rental startup, which had about 1,000 staff in 2024, said the layoffs will bolster its long-term growth plans during economic uncertainty. Laid off roughly 200 employees to improve efficiency and profitability. It’s the startup’s second round of layoffs in five months, following the job cuts of around 300 employees in December. The conversational AI company, backed by Tiger Global and Fidelity, was last valued at $1.4 billion in 2021. The startup is based in San Francisco and operates in India. Has reportedly eliminated 200 jobs, affecting around one-third of its employees. The German logistics startup reduced a significant number of sales staff.
Will stop its operations in China, affecting around 2,000 employees. The move came after Microsoft decided to stop outsourcing after-sales support to Wicresoft amid increasing trade tensions. Wicresoft, Microsoft’s first joint venture in China, was founded in 2002 and operates in the U.S., Europe, and Japan. It has over 10,000 employees. Plans to cut 123 jobs, affecting about 4% of its workforce, according to a report by MarketWatch. The software company is prioritizing key strategic areas like artificial intelligence to drive profitable growth. Has laid off hundreds of employees in its platforms and devices division, which covers Android, Pixel phones, the Chrome browser, and more, according to The Information. Is contemplating additional layoffs that could happen by May, Business Insider reported, citing anonymous sources. The company is said to be discussing reducing the number of middle managers and non-coders in a bid to increase the ratio of programmers to product managers. The WordPress.com developer is laying off 16% of its workforce across departments. Before the layoffs, the company’s website showed it had 1,744 employees, so more than 270 staff may have been laid off. Has let go of 10 to 12 technical writers approximately nine months after telling its employees to use generative AI tools wherever possible. The company, which had around 5,500 staff in 2024, was valued at $26 billion after a secondary stock sale in 2024. March Has laid off 2,800 employees, affecting 62% of its total staff. The layoffs come weeks after the embattled Swedish battery maker filed for bankruptcy. Let go of 931 employees, around 8% of its workforce, as part of a reorganization, according to an internal email seen by TechCrunch. Jack Dorsey, the co-founder and CEO of the fintech company, wrote in the email that the layoffs were not for financial reasons or to replace workers with AI. Has laid off 198 employees, who make up about two-thirds of its U.S. workforce, per a media report.
The layoff comes a month after the company was acquired by Bending Spoons, an Italian app developer, for $233 million. Brightcove had 600 employees worldwide, with 300 in the U.S., as of December 2023. Has reportedly laid off 130 employees, or 3.5% of its total workforce of 3,700 people. Acxiom is owned by IPG, and the news comes just a day after IPG and Omnicom Group shareholders approved the companies’ potential merger. Plans to close its office in Washington, D.C., and let go of its policy team there by the end of March, TechCrunch has confirmed. Sequoia opened its Washington office five years ago to deepen its relationship with policymakers. Three full-time employees are expected to be affected, per Forbes. Announced plans to cut approximately 5,600 jobs globally in its automation and electric-vehicle charging businesses as part of efforts to improve competitiveness. Is reportedly laying off 273 employees, closing its distribution center in Grand Prairie, Texas, and consolidating to another site in Irving to manage the volume in the region. Has cut 45 employees, more than half of its workforce, after being acquired by cybersecurity company Armis for $120 million in March. Will reportedly cut 22 positions, representing 7% of its workforce. Most of those affected are based in Israel as the company undergoes a streamlining process. The New York- and Tel Aviv-headquartered cybersecurity firm raised $100 million at a valuation of about $500 million in 2021. Will cut 22 jobs, affecting nearly a quarter of its total workforce, following the announcement of the AI startup’s strategic partnership with Microsoft. Announced it will be shutting down several of its offices in accordance with Elon Musk’s DOGE, including its Office of Technology, Policy, and Strategy and the DEI branch in the Office of Diversity and Equal Opportunity. Has reportedly laid off some staff, according to LinkedIn posts from ex-employees.
The company has not confirmed the layoffs, and it is currently unknown how many workers were affected. Announced plans to let go of 340 employees in its technology division as part of a new restructuring effort. Will cut 2,500 employees, or 5% of its total staff, in response to its shares sliding 19% in the first fiscal quarter. Will cut up to 300 workers in Dublin, accounting for roughly 10% of the company’s workforce in Ireland. Announced it will lay off 65 employees, affecting 5% of its total workforce. Is reportedly set to lay off over 1,000 employees and contractors in a cost-cutting effort. It’s the second round of cuts for the company in just five months. Reduced its total headcount by 16% as the gaming startup shifts its focus to be “scrappier” and “more efficient.” Was shut down just three years after it was acquired by Flipkart. It is currently unknown how many employees were affected. February Will cut up to 2,000 jobs as part of its “Future Now” restructuring plan that hopes to save the company $300 million before the end of its fiscal year. Announced 500 job cuts after it was sold to Wonder Group for $650 million. The number of cuts affected more than 20% of its previous workforce. Announced plans to lay off 1,350 employees, affecting 9% of its total workforce, in an attempt to reshape its GTM model. The company is also making reductions in its facilities, though it does not plan to close any offices. Is planning to cut employees in its People Operations and cloud organizations teams in a new reorganization effort. The company is offering a voluntary exit program to U.S.-based People Operations employees. Reduced its headcount by 25 employees, accounting for 16% of its total workforce. The company is planning to release a commercial version of its proteome analysis platform in 2026. Will reportedly cut a few dozen employees in Israel, potentially affecting 10% of its 250-person workforce in the country. 
Cut 1,100 jobs in a reorganizing effort that affected its tech workers. The coffee chain will now outsource some tech work to third-party employees. Laid off dozens of employees over the last few weeks, including around 10% of staff in one day, after failing to meet its sales growth targets. The “headless commerce” platform raised money at a $1.9 billion valuation just a few years ago. Will cut roughly 5% of its current workforce in a new efficiency drive to increase profitability and growth. Laid off more employees in a new effort to cut costs, though the total number is unknown. Last year, the travel giant cut about 1,500 roles in its Product & Technology division. Has ceased operations and has laid off its employees after selling its business and technology to Israeli cybersecurity company Tufin. The cuts affect roughly 300 people. Is shutting down its operations after shifting from a brick-and-mortar model to a fully virtual women’s healthcare provider. The startup, which raised $18 million in 2023, has not disclosed how many employees are affected, saying recent layoffs were tied to its former in-person business. Cut 51 jobs in its San Francisco headquarters, according to state filings with the Employment Development Department. The SaaS startup previously reduced its headcount by 8% in 2023. Has cut 120 employees, affecting 44% of its total staff. It’s the Y Combinator-backed Nigerian startup’s second layoff round in just five months. Reportedly laid off dozens of employees as part of a new cost-cutting effort that aims to ensure “long-term success” in the startup’s mission to curb misinformation online. Will lay off about 10% of its workforce, affecting more than 1,000 employees. According to an email to staff obtained by CNN, the cuts will largely have an impact on positions in engineering and program management. 
Announced in an SEC filing that it will cut around 450 positions between February and July 2025, with a complete restructuring set to be completed in the fall, following its new partnership with Zillow. Is laying off 6% of its total workforce, the cybersecurity firm confirmed to TechCrunch. The cuts come less than two weeks after Sophos acquired Secureworks for $859 million. Will cut nearly 200 employees as it introduces redundancy measures and closes down its operations in Poland and Kenya. Reportedly conducted another round of layoffs. It’s unknown how many employees were affected. Cut nearly 200 employees, CEO Mike Seckler announced in a note to employees, citing “potential adverse events” like a recession or rising interest rates. Cut 120 jobs, affecting roughly one-third of its total workforce, TechCrunch exclusively learned. The move comes just a year after the Dutch startup cut 90 employees following its rebrand. Laid off about 500 employees, affecting 15% of its workforce, citing poor business performance. The new cuts follow two earlier layoff rounds for the company that affected roughly 200 employees. Reportedly let go of approximately 200 employees, according to The Verge. The company previously cut 100 employees as part of a layoff round in August 2024. Laid off 1,750 employees, as originally reported by Bloomberg and confirmed independently by TechCrunch. The cuts affect roughly 8.5% of the enterprise HR platform’s total headcount. Laid off 180 employees, the company confirmed to TechCrunch. The cuts come just over one year after the access and identity management giant let go of 400 workers. Is laying off 50% of its workforce, including CEO Marc Whitten and several other top executives, as it prepares to shut down operations. What remains of the autonomous vehicle company will move under General Motors. Is reportedly eliminating more than 1,000 jobs. The cuts come as the giant is actively recruiting and hiring workers to sell new AI products. 
January
Has shut down operations, CEO Paul Kesserwani announced on LinkedIn. The fintech startup’s post-money valuation in 2022 was $82.4 million, according to PitchBook. Laid off 150 employees based in the U.S., affecting roughly 18% of its total workforce, in an effort to reach profitability. Laid off dozens of workers in its communications department in order to help the company “move faster, increase ownership, strengthen our culture, and bring teams closer to customers.” Is laying off 300 people, according to a leaked memo reported by Business Insider. However, according to the memo, the fintech giant is planning to grow its total headcount by 17%. Laid off 15 employees as the augmented writing startup undergoes a restructuring effort. Is cutting 75 employees in an effort to “ensure the long-term sustainability and success” of the company. The audio company last cut 200 writers in July 2024, months after partnering with ElevenLabs. Is planning to cut 58 employees in response to “ongoing macroeconomic challenges and continued uncertainty in the solar industry.” Announced in an internal memo that it will cut 5% of its staff, targeting “low performers,” as the company prepares for “an intense year.” As of its latest quarterly report, Meta currently has more than 72,000 employees. Will cut up to 730 jobs, affecting 3% of its total workforce, as it plans to exit operations in Germany and focus on physical retailers. Is shutting down its operations, affecting 63 employees. The delivery startup said employees will be paid through January 15 without severance. Is laying off 114 employees as part of a team realignment, per a new WARN notice filing, focusing its efforts on a robotic printing system. Eliminated 37 jobs, affecting roughly 10% of its total workforce, even as the company pursues “aggressive” hiring. Is cutting dozens of employees across its global markets as part of a strategic reorganization to increase profitability.
Plans to lay off 400 employees globally. It’s the company’s fourth layoff round since January 2024 as the solar industry as a whole faces a downturn. The fintech startup, founded in 2018, abruptly shut down earlier this year. Per an email from CEO Paul Aaron, the closure follows an unsuccessful attempt to find a buyer, though Employer.com has a new offer under consideration to acquire the company post-shutdown. This list updates regularly. On April 24, 2025, we corrected the number of layoffs that happened in March. Kate Park is a reporter at TechCrunch, with a focus on technology, startups and venture capital in Asia. Alyssa Stringer was formerly the Audience Development Manager for TechCrunch. © 2025 TechCrunch Media LLC.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Futhark_(programming_language)] | [TOKENS: 397] |
Futhark (programming language) Futhark is a multi-paradigm, high-level, functional, data parallel, array programming language. It is a dialect of the language ML, originally developed at the UCPH Department of Computer Science (DIKU) as part of the HIPERFIT project. It focuses on enabling data parallel programs written in a functional style to be executed with high performance on massively parallel hardware, especially graphics processing units (GPUs). Futhark is strongly inspired by NESL, and its implementation uses a variant of the flattening transformation, but it imposes constraints on how parallelism can be expressed in order to enable more aggressive compiler optimisations. In particular, irregular nested data parallelism is not supported. It is free and open-source software released under an ISC license. Overview Futhark is a language in the ML family, with an indentation-insensitive syntax derived from OCaml, Standard ML, and Haskell. The type system is based on a Hindley–Milner type system with a variety of extensions, such as uniqueness types and size-dependent types. Futhark is not intended as a general-purpose programming language for writing full applications, but is instead focused on writing compute kernels (not always the same as a GPU kernel) which are then invoked from applications written in conventional languages. Futhark is named after the first six letters of the Runic alphabet. Examples The following program computes the dot product of two vectors containing double-precision numbers. It can also be equivalently written with explicit type annotations as follows. This makes the size-dependent types explicit: this function can only be invoked with two arrays of the same size, and the type checker will reject any program where this cannot be statically determined. The following program performs matrix multiplication, using the definition of dot product above.
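The program listings referenced in the Examples section above were not preserved in this extraction. A sketch of what they look like in current Futhark syntax follows; the names `dotprod`, `dotprod_explicit`, and `matmul` are conventional choices, not guaranteed to match the original article's listings:

```futhark
-- Dot product of two f64 vectors; array sizes are inferred.
def dotprod (xs: []f64) (ys: []f64): f64 =
  reduce (+) 0.0 (map2 (*) xs ys)

-- The same function with an explicit size parameter [n]: the type
-- checker statically requires both arguments to have length n.
def dotprod_explicit [n] (xs: [n]f64) (ys: [n]f64): f64 =
  reduce (+) 0.0 (map2 (*) xs ys)

-- Matrix multiplication built from the dot product. The size
-- parameters [n][m] and [m][p] force the operands to have
-- compatible shapes, and the nested maps are an instance of
-- (regular) nested data parallelism.
def matmul [n][m][p] (xss: [n][m]f64) (yss: [m][p]f64): [n][p]f64 =
  map (\xs -> map (dotprod_explicit xs) (transpose yss)) xss
```

Note how the inner `map` runs a dot product per output element while the outer `map` runs one per row, which the compiler flattens into a single parallel kernel.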
This shows how the types enforce that the function is only invoked with matrices of compatible size. Also, it is an example of nested data parallelism.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Igbo_Jews] | [TOKENS: 1391] |
Igbo Jews Igbo Jews are members of the Igbo people of Nigeria who practice Judaism. It is a tenet of their beliefs that they have ties to one of the lost tribes of Israel, the tribe of Gad. Jews have been documented in parts of Nigeria since the precolonial period, but the Igbo are not known to have claimed Israelite descent or to have practiced Judaism prior to colonial times. Significant Igbo identification with Jews concretized during and after the Biafran War (1967–1970). No formal census has been taken in the region, and the precise number of Igbo in Nigeria who practice Judaism is not known. In 2008, an estimated 30,000 Igbo Jews, with at least 26 synagogues of various sizes, were said to exist. In 2021 there were said to be approximately 12,000–15,000 practicing Igbo Jews in Nigeria, comprising some 70 active communities. A more conservative figure of at least 2,000–3,000 Igbo practicing Judaism, and at most 5,000, has also been given. Historical scrutiny An early and widely influential statement from Olaudah Equiano, a Christian-educated Igbo man and freed slave, suggested a Jewish migratory origin for the Igbo. He speculated in his autobiography of 1789 on "the strong analogy which ... appears to prevail in the manners and customs of my countrymen and those of the Jews, before they reached the Land of Promise, and particularly the patriarchs while they were yet in that pastoral state which is described in Genesis—an analogy, which alone would induce me to think that the one people had sprung from the other." Critical historians have reviewed the literature on West Africa that was published during the nineteenth and early twentieth centuries. They have clarified the diverse functions that such histories served for the writers who proposed them at various times in the colonial and post-colonial past.
Though there is no doubt that Jews were present in Saharan trade centers during the first millennium AD, there is no evidence that Igbo people had contemporaneous contact with historical Jewish populations, or that they had at any point adopted or practiced Judaism prior to colonization by the European powers. Religious practices The religious practices of the Igbo Jews include circumcision eight days after the birth of a male child, the observance of kosher dietary laws, the separation of men and women during menstruation, the wearing of the tallit and kippah, and the celebration of holidays such as Rosh Hashanah, Yom Kippur, Hanukkah, and Purim. Contemporary outreach Certain Nigerian communities with Judaic practices have received help from individual Israelis and American Jews who work in Nigeria, outreach organizations like the American Kulanu, and Black Hebrew Israelite communities in America. Rabbi Howard Gorin visited the community in 2006 and members of his synagogue, "Tikvat Israel" in Rockville, Maryland, USA, supported those in Nigeria by sending books, computers, and religious articles. In addition to Rabbi Howard Gorin, visitors have included Dr. Daniel Lis, Professor William F. S. Miles, filmmaker Jeff L. Lieberman, and the American writer Shai Afsai. In 2013 Shai Afsai invited two Igbo Jewish leaders, Elder Ovadiah Agbai and Prince Azuka (Pinchas) Ogbukaa of Abuja's Gihon Hebrew Synagogue, to Rhode Island in the United States. The visit of the two men led Rabbi Barry Dolinger of Rhode Island to go to Nigeria with Afsai in 2014. A main concern of Igbo Jews has been how to be part of the wider Jewish world. According to Elder Pinchas (Azuka) Ogbukaa, spokesman of Abuja's Gihon Synagogue, the "greatest of all the challenges we are facing is that of isolation." Igbo Jews in Israel Over the past few decades, several Igbo have immigrated to Israel, particularly to Tel Aviv. 
This wave of immigration can partially be explained by a small diaspora that was established in Israel when Nigeria was granted independence in 1960. This is partially due to comprehensive educational programs that the Israelis implemented in the new Nigerian state after the 1960s, programs that familiarized many people with the idea of Israel as a modern nation state for the first time, and the possible opportunities that existed for Jewish people who lived there. The Igbo Jewish community is not recognized as a Jewish community for the purpose of immigration to Israel by Israel's Supreme Court. Additionally, none of the mainstream denominations of Judaism consider the group an authentically Jewish community. Indeed, while they identify themselves as being a part of the worldwide Jewish community, they are still struggling to be recognized as Jews by Jews. An affiliate of Gihon Hebrews' Synagogue expressed this struggle to Shai Afsai in Abuja: "We say we are Jews from blood. We are now excluded; we cannot go and participate as Jews in any place. I make an appeal that we be recognized, not excluded and isolated from other Jews." However, some Igbo Jews are currently adopting more rigorous religious customs, in order to gain more acceptance from the mainstream Jewish community. For instance, Daniel Lis explained in his article that parts of the Igbo Jewish community are assimilating themselves to the standards of Orthodox Judaism, so as to be universally accepted as Jews in Israel. While Igbo Jews claim that they are the descendants of the ancient Israelites, others say they lack the historical evidence which would prove their descent from such a community, and they also lack evidence of a continuous practice of Judaism which should predate colonial contact. 
Both the possibility that the state might make such a determination and the possibility that a Jewish denomination might recognize the entire community as authentically Jewish are frustrated by the fact that some Igbo Jews simultaneously claim to be Christians, calling their commitment to Judaism and their claim to a Jewish identity into question. Among them are a number of Igbo who have illegally immigrated to Israel by simultaneously claiming to be Jews and Christians. According to Israeli officials, a number of Igbo were granted the right to travel in Israel for the purposes of Christian pilgrimage, but they have overstayed their visas and are now illegally living and working in the country. The State of Israel has made no official recommendation as to whether the Igbo Jews constitute a legally recognizable Jewish community for the purposes of immigration to Israel, nor is their legal status currently being debated at any level within the state. However, several Igbo Jews who have undergone formal conversion to Orthodox or Conservative Judaism have been accepted as Jews on an individual basis under the Law of Return and have immigrated to Israel.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Colonist_(The_X-Files)] | [TOKENS: 4142] |
Colonist (The X-Files) The Colonists are an extraterrestrial species and the primary group of antagonists in the science fiction television show The X-Files as well as in the first X-Files feature film. The mystery surrounding their identity and purpose is slowly revealed over the course of the series. In the series' plot, the Colonists are collaborating with a group of United States government officials known as the Syndicate in a plan to colonize the Earth, hence their name. Character arc According to the series' mythology, an extraterrestrial lifeform, known simply as the Colonists, was originally present on Earth in the early stages of human evolution. They closely resemble the well-known "grey aliens" in their mature form. In their immature stage, they are more yellowish-colored, tall, and very aggressive, possessing fangs, claws, and a scale-like skin texture. This immature form is a protective stage, able to viciously defend itself from birth. This outer skin is eventually shed when the alien develops into its mature form. The immature form resembles that of a reptilian extraterrestrial and is referred to as the "long-clawed" form. The aliens were forced to abandon the planet during the last ice age, as their viral form is deactivated by extreme cold. Upon their departure, they left behind underground deposits of the black oil virus in preparation for their return. The virus apparently contains the aliens' genetic blueprints, awaiting reconstitution when the master species returns to Earth. While away from Earth, the Colonists evidently sought out life throughout the universe in an effort to subdue other species and take over the universe. Purportedly, the Colonists planned on returning to Earth in the year 2012, as The Smoking Man (William B.
Davis) later remarks to Fox Mulder (David Duchovny) that the ancient Mayans were so terrified that they stopped their calendar on the exact date of colonization: December 22, 2012. The Colonists eventually returned to Earth in 1947 when one of their ships crashed in New Mexico due to exposure to magnetite in the surrounding rocks. Shortly after this event, a select few power brokers, mainly in the United States and the Soviet Union (though some also came from other nations), first learned of the Colonist plot to retake the planet. These men eventually formed the Syndicate; they included the Well-Manicured Man (John Neville), The Smoking Man, and Bill Mulder (Peter Donat), among others. The Syndicate threatened to use nuclear weapons to render the Earth uninhabitable to the Colonists due to extreme cold. As such, humanity was spared immediate invasion, and The Syndicate began negotiations with the Colonists. In 1973, an alliance was created and an agreement was reached that a small group of humans would be allowed by the Colonists to survive by becoming alien-human hybrids. The date for recolonization was firmly established in 2012 so that both sides could work on creating an alien-human hybrid before the arrival of the Colonist fleet. These hybrids, if successful, would serve as a slave race for the Colonists – the Syndicate and other chosen humans would receive the hybrid genes and be spared. In exchange for the Syndicate's cooperation, the Colonists handed over alien embryos as a source of genetic material for the hybrid experiments, as well as allowing limited military use of their technology and resources (such as the Alien Bounty Hunters), and promising that the heirs of the Syndicate members, who were turned over to the Colonists as an act of good faith, would survive the takeover. Meanwhile, both sides had a secret. The human conspirators would attempt to develop a vaccine for the alien virus in an effort to save all of humanity. 
The Colonists pretended that mass infection of humanity by the black oil would make them a controlled slave race, but in reality the oil would give birth to new alien beings within the human hosts resulting in re-population rather than colonization. Alien factions The Bounty Hunters are a distinct group from the Colonist aliens. Although all Colonist aliens are shape-shifters, the Alien Bounty Hunters readily take the shape of humans and are tasked with policing their plans and tracking down and eliminating any threats. The bounty hunters have green blood that contains a retrovirus which, when exposed to humans, is lethal. The alien blood can burn through most surfaces like an acid, and can kill a normal human if exposed for too long to its noxious fumes. In addition, the alien's blood causes human blood to coagulate into a jelly-like substance, but its effects can be neutralized by extreme cold. The Bounty Hunters, and any member of the Colonists' race, could also only be killed by piercing a small area at the base of the neck; the bounty hunters carried a kind of alien stiletto-like weapon to assassinate rogue aliens and to destroy imperfect alien-human hybrids. Once dead, their bodies would rapidly dissolve into a pool of their severely acidic blood, which would eventually evaporate. Inconsistent with this arc, in season 8 episode 2 "Without" Scully kills the bounty hunter by shooting him with her gun. He oozes green blood, yet neither she, nor the other FBI agents who enter the room while the body is disintegrating, are affected by the green acid-like blood. The Alien Rebels, those aliens that opposed the plans of the alien Colonists, are of the same species. The distinguishing characteristic of the Rebels is that they have had all of their facial orifices sealed shut in an effort to avoid absorption of and subsequent infection by the parasitic black oil; this led to a somewhat grotesque appearance. 
In the latter seasons of The X-Files, the role of the Bounty Hunters on Earth has largely been taken over by the Super Soldiers, human replacements capable of withstanding incredible amounts of damage. In the two-part episodes "Colony" and "End Game", Chris Carter and Frank Spotnitz, along with some help from David Duchovny, created what would become a recurring character named the Alien Bounty Hunter. According to Carter, Duchovny came to him and said "wouldn't it be great if we had like an alien bounty hunter?" Carter was positive towards the idea and acted upon it. The actor Brian Thompson auditioned for the role in a casting session, where he was competing with another actor. Spotnitz and Carter did not have much time to cast the role, but they knew the casting would be important since they intended the character to recur. Thompson was chosen, according to Spotnitz, because he had a very "distinctive look" about him, most notably his face and mouth. After casting him, they told Thompson's agent that he needed a haircut, because the character was originally envisioned as a U.S. Air Force pilot who'd been shot down. When Thompson came to Vancouver, British Columbia, Canada, there had been some "misunderstanding" between them, and he had not been told of the haircut. So the hairstyle seen in this and every episode since was a "compromise" between Thompson and the producers. Critical reception to the Alien Bounty Hunters has been largely positive. Den of Geek named the Alien Bounty Hunters among "The Top 10 X-Files Baddies". The review wrote positively of them, describing them as "the nasty minions of the colonists", and called their being written out of the series a "shame". The site awarded the bounty hunters a "Coolness" rating of four out of five, an "Impact" rating of two out of five, and a "Creepiness" rating of two out of five.
Purity, more commonly referred to as black oil, and called the "black cancer" by the Russians, is an alien virus that thrived underground on Earth, in petroleum deposits. The virus is capable of entering humanoids and assuming control of their bodies. It is sentient and capable of communicating. It was revealed to be the "life force" of the alien colonists, which they seemingly used to reproduce their kind, as well as to infect other alien races in order to conquer the universe. The Syndicate, in cooperation with the alien Colonists, developed a delivery mechanism that would be used to introduce the virus into an unsuspecting public upon colonization. Extremely aggressive Africanized bees, which would sting indiscriminately, would carry the black oil virus through a transgenic corn crop specifically engineered to carry the virus and to attract the bees. The bees would be released on colonization, and the infected human beings would become a slave race. The Syndicate, however, secretly tried to create a vaccine to protect themselves, which they code-named "Purity Control." While the Purity Control project ultimately failed, a rival Russian shadow group was successful in developing a weak vaccine that eventually fell into the hands of the Syndicate. The plot to cooperate with the alien colonization plan was implemented with the aim of being given access to the black oil for the transgenic corn, in order to perform experiments with it in an effort to develop a vaccine. This attempt was semi-successful, as the "weak vaccine" administered to Scully while in the Antarctic alien ship was able to cure her infection and cause the entire ship to depart its underground residence. After the events of the 1998 film, the Syndicate, as well as Mulder and Scully, learned that the black oil can either take over a host's body or incubate within other life forms, including humans.
Once infected with the gestational form of the black oil virus, a human host gestates the immature alien form after 96 hours, or sooner if the surrounding temperature is raised significantly, killing the host in the process. The third season episode "Piper Maru" marked the first occurrence of the black oil. The on-screen appearance of the substance was achieved through visual effects, with the shimmering oil effect being digitally placed over the actors' corneas in post-production. The crew went through various iterations to find the two "right" types of fluids. According to physical effects crewman David Gauthier, they used a mix of oil and acetone, which he believed gave the substance a more globular look. Special effects technician Mat Beck was able to digitally bend the oil effect around the shape of the actors' eyes. The season eight episode, "Vienen", marked the last appearance of the black oil in the series. Molasses and chocolate syrup were used for the visual effects of the black oil. The scene with the black oil coming out of the eyes, ears and mouth was mostly done on a visual effects stage. Due to the uncontrollable nature of the substance, it took nine takes to get the syrup to spill on the right places. Critical reception to the black oil has been largely positive. Den of Geek named the black oil and the killer bees among "The Top 10 X-Files Baddies". The review applauded the black oil's creepy nature and noted that the black oil was "central part of the larger Colonisation Plan that underpins the big story arc of the series". They awarded it a "Coolness" rating of four out of five, an "Impact" rating of three out of five, and a "Creepiness" rating of four out of five. Furthermore, Den of Geek wrote positively of the Killer Bees and wrote that "you gotta love" them. The review stated that they were "in the pantheon of 'cool shit to do in movies'" and that their presence added to the overall effect of the first movie. 
The site awarded the bees a "Coolness" rating of five out of five, an "Impact" rating of two out of five, and a "Creepiness" rating of two out of five. There also exists a faction of aliens who actively oppose Colonization. They are the same species as the Alien Bounty Hunter(s), free from the effects of black oil infection. The Rebels are distinguished by their grotesque appearance: the orifices on their face are morphed shut to avoid absorption of the parasitic black oil. Colonization has apparently begun in the Rebels' home environment, but members of this species are spared gestation to become Bounty Hunters for the Colonists' ongoing conquest efforts. Although enemies of the Colonists, the Rebels can also be hostile to humanity; they carry prod-like weapons that can quickly incinerate a human and do not hesitate to use them. They burn abductees with chips in their necks at abduction sites in attempts to prevent colonization from proceeding. The Rebels have a vested interest in keeping the Colonists from finding out that a successful hybrid has been created. While the Rebels had an opportunity to destroy the hybrid, Cassandra Spender, they choose to let her survive in the hope that the Syndicate will join them in fighting the Colonists. If they refuse, Cassandra can be used to expose the truth and the conspiracy. The Rebels go so far as to infiltrate the Syndicate and bring up the possibility of fighting the Colonists. However, the Syndicate decides that fighting the Colonists would be futile. At this point, a fully working vaccine has not been created, and it is therefore decided that the best thing for the Syndicate to do is to comply with the original deal and turn over the hybrid to the Colonists in the hope that they are spared the resulting takeover. 
Before this can be done, however, the Rebels kill all but a few members of the Syndicate in addition to Cassandra, the only living successful alien-human hybrid, before the Syndicate is able to send a signal to the Colonists. Without a successful hybrid, the timetable for the Colonist invasion will not be advanced and the date set for colonization remains December 22, 2012. Frank Spotnitz explained that, despite their tendency of killing human abductees, the rebels can really be viewed as allies of mankind. He said in the commentary of "One Son" that the faceless rebels and the human race are the only two species of advanced civilization in the galaxy that have not been consumed by the black oil. Thus, he notes that at the end of the episode "humanity has lost, Mulder has lost, and the rebels in fact save the day, ultimately by stopping the delivery of Cassandra Spender to the aliens". The initial look of the rebels was created by special effects supervisor Tony Lindala, during the production of "Patient X". Spotnitz had a problem with the visual effects used for the rebels in "One Son", going so far as to say that the effects of them landing were among the worst ever created for the show. He noted that the effects were created on short notice, saying "It was one of those cases where you just run out of time, sorry to say." Spotnitz later said that the overall production values were fantastic, but that he wished he could have changed some things about the episode; in particular, he wished to have done the scene wherein one of the Syndicate members changes into an alien rebel differently. Early attempts to create alien-human hybrids were pioneered by German and Japanese scientists shortly after World War II, and for some time during the Cold War. However, these often met with failure, and the Syndicate started to rely more on their own scientists. 
According to the Alien Bounty Hunter, in the 1950s, Soviet geneticists found a unique genetic anomaly within identical twins. The Colonists and Syndicate scientists used this to eventually develop human clones with alien elements and partial hybrids, but they were still ultimately inferior. Hybrids of this type include Samantha Mulder, Kurt Crawford, the Gregors, Ernest Calderon, and Dr. William Secare. Child and adult versions of Samantha and Kurt are also seen. These clones have the same caustic greenish blood of the aliens, have greater muscular strength and higher physical endurance levels than most normal humans, and can breathe underwater. In addition to their intended use by the alien colonists, the Syndicate is occasionally seen using these clones to perform various tasks, such as research and physical labor. The pinnacle of the project is Cassandra Spender (Veronica Cartwright), mother of Jeffrey Spender (Chris Owens) and ex-wife of The Smoking Man. Cassandra is a hybrid created through a process other than cloning, and worked on by both the Syndicate and the aliens themselves, although the exact methods used to transform her are never fully revealed. The experiment presumably began when she, along with other family members of the Syndicate, were turned over to the colonists in 1973. For years, Cassandra was under the mistaken impression that she was to be an emissary of the aliens to spread a higher spiritual understanding to humanity, but after her final abduction in the late 1990s she comes to realize the truth. She is killed, along with most of the Syndicate, by the alien rebels. After being exposed to an alien artifact, Mulder slipped into a coma, although he was imbued with telepathic abilities. In order to find a cure for her partner, Dana Scully (Gillian Anderson) discovered a book containing Native American beliefs and practices; the books described how one man would be able to hold off the forces of the apocalypse and become humanity's savior. 
Meanwhile, The Smoking Man took Mulder and prepped to have his genetic material—the same material that allowed Mulder to become telepathic—implanted into him. He believed that Mulder had, in effect, become a perfect alien-human hybrid and that by taking his genetic material, he would be able to continue "The Project" and survive the coming alien onslaught. With nearly every Syndicate member dead, the Colonists began to clear up any evidence of alien life and began to create human replacements called "Super Soldiers", on which the Colonists had been working covertly as an alternative slave race should the hybridization experiments be unsuccessful. To create Super Soldiers, the Colonists infect humans with a new strain of their virus, which slowly destroys and then rebuilds the body of the host. This process seems to involve a lengthy surgical procedure on abductees as opposed to simple infection (as with the black oil). As they have normal red blood and can replace individuals within powerful positions, they provide an ideal way for the Colonists to infiltrate humanity to ensure that the plans for colonization are uninterrupted. They are identifiable however by small spiny protrusions on the backs of their necks or by detailed analysis of a blood sample which shows their DNA exists as a complex with iron. Although they cannot shapeshift, Super Soldiers are practically unstoppable. They can survive being crushed by a garbage compactor, decapitation, and can rip through steel with their bare hands. The only known way to kill them takes advantage of their metallic biochemistry: their bodies are torn apart by the magnetic fields present near large deposits of magnetite ore. The Super Soldiers quietly fill the positions of power previously occupied by Syndicate members and rarely use human conspirators. 
By the end of season 9, the Super Soldiers had virtually replaced the Syndicate and had succeeded in driving Mulder and Scully out of the FBI, so that the two could no longer investigate the X-Files or interfere with their plans. As of their last appearance, they were preparing for the final invasion in 2012. Critical reception of the Super Soldiers was mixed. The A.V. Club was highly critical of the final season and its mythology story, noting that the "new serialized storylines about so-called 'super soldiers'" resulted in a "clumsy mish-mash" of ideas that did and did not work. Not all reviews were negative, however. Den of Geek named the Super Soldiers among "The Top 10 X-Files Baddies". The site wrote moderately positively of the Super Soldiers and applauded the show's continuity, citing the decision to make Billy Miles, a character who appeared in the series' pilot episode, the first Super Soldier. However, the review did call them not "as interesting as what came before them". The site awarded the Super Soldiers a "Coolness" rating of three out of five, an "Impact" rating of three out of five, and a "Creepiness" rating of two out of five.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dialects_of_Chinese] | [TOKENS: 8365] |
Varieties of Chinese There are hundreds of local Chinese language varieties[b] forming a branch of the Sino-Tibetan language family, many of which are not mutually intelligible. Variation is particularly strong in the more mountainous southeastern part of mainland China. The varieties are typically classified into several groups: Mandarin, Wu, Min, Xiang, Gan, Jin, Hakka and Yue, though some varieties remain unclassified. These groups are neither clades nor individual languages defined by mutual intelligibility, but are identified by common correspondences with selected features of Middle Chinese. Chinese varieties differ in their phonology, vocabulary and syntax. Southern varieties tend to have fewer initial consonants than northern and central varieties, but more often preserve the Middle Chinese final consonants. All have phonemic tones, with northern varieties tending to have fewer distinctions than southern ones. Many have tone sandhi, with the most complex patterns in the coastal area from Zhejiang to eastern Guangdong. There are major lexical and grammatical differences between northern and southern varieties, but often some northern areas have features found in the south, and vice versa. Standard Chinese takes its phonology from the Beijing dialect, with vocabulary from the Mandarin group and grammar based on literature in the modern written vernacular. It is one of the official languages of China, the de facto official language of Taiwan and one of the four official languages of Singapore. It has become a pluricentric language, with differences in pronunciation and vocabulary among the standard forms used in these places. Standard Chinese is also one of the six official languages of the United Nations. History At the end of the 2nd millennium BC, a form of Chinese was spoken in a compact area along the lower Wei River and middle Yellow River.
Use of this language expanded eastwards across the North China Plain into Shandong, and then southwards into the Yangtze River valley and the hills of south China. Chinese eventually replaced many of the languages previously dominant in these areas, and forms of the language spoken in different regions began to diverge. During periods of political unity there was a tendency for states to promote the use of a standard language across the territory they controlled, in order to facilitate communication between people from different regions. The first evidence of dialectal variation is found in the texts of the Spring and Autumn period (771–476 BC). Although the Zhou royal domain was no longer politically powerful, its speech still represented a model for communication across China. The Fangyan (early 1st century AD) is devoted to differences in vocabulary between regions. Commentaries from the Eastern Han (25–220 AD) provide significant evidence of local differences in pronunciation. The Qieyun, a rime dictionary published in 601, noted wide variations in pronunciation between regions, and was created with the goal of defining a standard system of pronunciation for reading the classics. This standard is known as Middle Chinese, and is believed to be a diasystem, based on a compromise between the reading traditions of the northern and southern capitals. The North China Plain provided few barriers to migration, which resulted in relative linguistic homogeneity over a wide area. Contrastingly, the mountains and rivers of southern China contain all six of the other major Chinese dialect groups, with each in turn featuring great internal diversity, particularly in Fujian. Until the mid-20th century, most Chinese people spoke only their local language. As a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guānhuà (官話/官话 'officer speech'). 
While this language was never formally defined, knowledge of it was essential for a career in the imperial bureaucracy. In the early years of the Republic of China, Literary Chinese was replaced as the written standard by written vernacular Chinese, which was based on northern varieties. In the 1930s, a standard national language was adopted, with pronunciation based on the Beijing dialect, vocabulary drawn from a range of Mandarin varieties, and grammar based on literature in the modern written vernacular. Standard Chinese is the official spoken language of the People's Republic of China and Taiwan, and is one of the four official languages of Singapore. It has become a pluricentric language, with differences in pronunciation and vocabulary among the standard forms used in these places. Standard Chinese is much more widely studied than any other variety of Chinese, and its use is now dominant in public life on the mainland. Outside of China and Taiwan, the only varieties of Chinese commonly taught in university courses are Standard Chinese and Cantonese. Local varieties from different areas of China are often mutually unintelligible, differing at least as much as different Romance languages and perhaps even as much as Indo-European languages as a whole. As with the Romance languages descended from Latin, the ancestral language was spread by imperial expansion over substrate languages around 2000 years ago, by the Qin and Han empires in China and the Roman Empire in Europe. Medieval Latin remained the standard for scholarly and administrative writing in Western Europe for centuries, influencing local varieties much as Literary Chinese did in China. In both cases, local forms of speech diverged from both the literary standard and each other, producing dialect continua with mutually unintelligible varieties separated by long distances.
However, a major difference between China and Western Europe is the historical reestablishment of political unity in 6th century China by the Sui dynasty, a unity that has persisted with relatively brief interludes until the present day. Meanwhile, Europe remained politically decentralized, developing into numerous independent states. Vernacular writing using the Latin alphabet supplanted Latin itself, and states eventually developed their own standard languages. In China, Literary Chinese was predominantly used in formal writing until the early 20th century. Written Chinese, read with different local pronunciations, continued to serve as a source of vocabulary for the local varieties. The new standard written vernacular Chinese, the counterpart of spoken Standard Chinese, is similarly used as a literary form by speakers of all varieties. Classification Dialectologist Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, although there are also some sharp boundaries. However, the rate of change in mutual intelligibility varies immensely depending on region. For example, the varieties of Mandarin spoken in all three northeastern Chinese provinces are mutually intelligible, but in the province of Fujian, where Min varieties predominate, the speech of neighbouring counties or even villages may be mutually unintelligible. Classifications of Chinese varieties in the late 19th century and early 20th century were based on impressionistic criteria. They often followed river systems, which were historically the main routes of migration and communication in southern China. The first scientific classifications, based primarily on correspondences with Middle Chinese voiced initials, were produced by Wang Li in 1936 and Li Fang-Kuei in 1937, with minor modifications by other linguists since. 
The conventionally accepted set of seven dialect groups first appeared in the second edition (1980) of Yuan Jiahua's dialectology handbook. The Language Atlas of China (1987) follows a classification of Li Rong, distinguishing three further groups. Some varieties remain unclassified, including the Danzhou dialect (northwestern Hainan), Mai (southern Hainan), Waxiang (northwestern Hunan), Xiangnan Tuhua (southern Hunan), Shaozhou Tuhua (northern Guangdong), and the forms of Chinese spoken by the She people (She Chinese) and the Miao people. She Chinese, Xiangnan Tuhua, Shaozhou Tuhua and unclassified varieties of southwest Jiangxi appear to be related to Hakka. Most of the vocabulary of the Bai language of Yunnan appears to be related to Chinese words, though many are clearly loans from the last few centuries. Some scholars have suggested that it represents a very early branching from Chinese, while others argue that it is a more distantly related Sino-Tibetan language overlaid with two millennia of loans. Jerry Norman classified the traditional seven dialect groups into three zones: Northern (Mandarin), Central (Wu, Gan, and Xiang) and Southern (Hakka, Yue, and Min). He argued that the varieties of the Southern zone are derived from a standard used in the Yangtze valley during the Han dynasty (206 BC – 220 AD), which he called Old Southern Chinese, while the Central zone was a transitional area of varieties that were originally of southern type but overlain with centuries of Northern influence. Hilary Chappell proposed a refined model, dividing Norman's Northern zone into Northern and Southwestern areas, and his Southern zone into Southeastern (Min) and Far Southern (Yue and Hakka) areas, with Pinghua transitional between the Southwestern and Far Southern areas. The long history of migration of peoples and interaction between speakers of different varieties makes it difficult to apply the tree model to Chinese.
Scholars account for the transitional nature of the central varieties in terms of wave models. Iwata argues that innovations have been transmitted from the north across the Huai River to the Lower Yangtze Mandarin area, from there southeast to the Wu area, and westwards along the Yangtze River valley and thence to southwestern areas, leaving the hills of the southeast largely untouched. Some dialect boundaries, such as that between Wu and Min, are particularly abrupt, while others, such as those between Mandarin and Xiang or between Min and Hakka, are much less clearly defined. Several east-west isoglosses run along the Huai and Yangtze Rivers. A north-south barrier is formed by the Tianmu and Wuyi Mountains. Most assessments of mutual intelligibility of varieties of Chinese in the literature are impressionistic. Functional intelligibility testing is time-consuming in any language family, and usually not done when more than 10 varieties are to be compared. However, one 2009 study aimed to measure intelligibility among varieties from 15 Chinese provinces. In each province, 15 university students were recruited as speakers, and 15 older rural inhabitants as listeners. The listeners were then tested on their comprehension of isolated words and of particular words in the context of sentences spoken by speakers from all 15 of the provinces surveyed. The results demonstrated significant levels of unintelligibility between areas, even within the Mandarin group. In a few cases, listeners understood fewer than 70% of words spoken by speakers from the same province, indicating significant differences between urban and rural varieties. As expected from the wide use of Standard Chinese, speakers from Beijing were understood better than speakers from elsewhere. The scores supported a primary division between northern groups (Mandarin and Jin) and all others, with Min as an identifiable branch. Because speakers share a standard written form, i.e.
written vernacular Mandarin Chinese, and have a common cultural heritage with long periods of political unity, the varieties are popularly perceived among native speakers as variants of a single Chinese language, and this is also the official position of the government of the People's Republic of China and formerly the position of the government of the Republic of China (Taiwan). Conventional English-language usage in Chinese linguistics is to use dialect for the speech of a particular place (regardless of status), with regional groupings like Mandarin and Wu called dialect groups. Reflecting its internal diversity, Chinese is usually considered a language family within the Sino-Tibetan phylum. Estimates of the number of languages implied by the criterion of mutual intelligibility range from dozens to hundreds, but no one has attempted to delimit them consistently. Some authors refer to each of the eight main groups such as Wu or Yue as a "language", but each of these groups contains mutually unintelligible varieties. ISO 639-3 and the Ethnologue assign language codes to each of the top-level groups listed above except Min and Pinghua, whose subdivisions are assigned seven and two codes respectively. Some linguists refer to the local varieties as languages, numbering in the hundreds. The Chinese term fāngyán 方言, literally 'place speech', was the title of the first work of Chinese dialectology in the Han dynasty, and has had a range of meanings in the millennia since. It is used for any regional subdivision of Chinese, from the speech of a village to major branches such as Mandarin and Wu, regardless of intelligibility. Linguists writing in Chinese often qualify the term to distinguish different levels of classification. All these terms have customarily been translated into English as dialect, a practice that has been criticized as confusing and inconsistent with typical usage. 
John DeFrancis proposed the neologism regionalect to serve as a translation for fāngyán when referring to the top-level groups, which are mutually unintelligible. Victor Mair coined the term topolect as a translation for all uses of fāngyán. The latter term appears in The American Heritage Dictionary of the English Language. Phonology The usual unit of analysis is the syllable, traditionally analysed as consisting of an initial consonant, a final and a tone. In general, southern varieties have fewer initial consonants than northern and central varieties, but more often preserve the Middle Chinese final consonants. Some varieties, such as Cantonese, Hokkien and Shanghainese, include syllabic nasals as independent syllables. In the 42 varieties surveyed in the Great Dictionary of Modern Chinese Dialects, the number of initials (including a zero initial) ranges from 15 in some southern varieties to a high of 35 in the Chongming dialect, spoken on Chongming Island, Shanghai. The initial system of the Fuzhou dialect of northern Fujian is a minimal example. With the exception of /ŋ/, which is often merged with the zero initial, the initials of this variety are present in all Chinese varieties, although several varieties do not distinguish /n/ from /l/. However, most varieties have additional initials, due to a combination of innovations and retention of distinctions from Middle Chinese. Chinese finals may be analysed as an optional medial glide, a main vowel and an optional coda. Conservative vowel systems, such as those of Gan varieties, have high vowels /i/, /u/ and /y/, which also function as medials, mid vowels /e/ and /o/, and a low /a/-like vowel. In other varieties, including Mandarin varieties, /o/ has merged with /a/, leaving a single mid vowel with a wide range of allophones. Many varieties, particularly in northern and central China, have apical or retroflex vowels, which are syllabic fricatives derived from high vowels following sibilant initials.
In many Wu varieties, vowels and final glides have monophthongized, producing a rich inventory of vowels in open syllables. Reduction of medials is common in Yue varieties. The Middle Chinese codas, consisting of glides /j/ and /w/, nasals /m/, /n/ and /ŋ/, and stops /p/, /t/ and /k/, are best preserved in southern varieties, particularly Yue varieties such as Cantonese. In some Min varieties, nasals and stops following open vowels have shifted to nasalization and glottal stops respectively. In Jin, Lower Yangtze Mandarin and Wu varieties, the stops have merged as a final glottal stop, while in most northern varieties they have disappeared. In Mandarin varieties final /m/ has merged with /n/, while some central varieties have a single nasal coda, in some cases realized as nasalization of the vowel. All varieties of Chinese, like neighbouring languages in the Mainland Southeast Asia linguistic area, have phonemic tones. Each syllable may be pronounced with between three and seven distinct pitch contours, denoting different morphemes. For example, the Beijing dialect distinguishes mā (妈/媽 'mother'), má (麻 'hemp'), mǎ (马/馬 'horse') and mà (骂/罵 'to scold'). The number of tonal contrasts varies, with northern varieties tending to have fewer distinctions than southern ones. The tonal categories of modern varieties can be related by considering their derivation from the four tones of Middle Chinese, though cognate tonal categories in different varieties are often realized as quite different pitch contours. Middle Chinese had a three-way tonal contrast in syllables with vocalic or nasal endings. The traditional names of the tonal categories are 'level'/'even' (平 píng), 'rising' (上 shǎng) and 'departing' (去 qù). Syllables ending in a stop consonant /p/, /t/ or /k/ (checked syllables) had no tonal contrasts but were traditionally treated as a fourth tone category, 'entering' (入 rù), corresponding to syllables ending in nasals /m/, /n/, or /ŋ/.
The tones of Middle Chinese, as well as similar systems in neighbouring languages, experienced a tone split conditioned by syllabic onsets. Syllables with voiced initials tended to be pronounced with a lower pitch, and by the late Tang dynasty, each of the tones had split into two registers conditioned by the initials, known as "upper" (阴/陰 yīn) and "lower" (阳/陽 yáng). When voicing was lost in all varieties except in the Wu and Old Xiang groups, this distinction became phonemic, yielding eight tonal categories, with a six-way contrast in unchecked syllables and a two-way contrast in checked syllables. Cantonese maintains these eight tonal categories and has developed an additional distinction in checked syllables. (The latter distinction has disappeared again in many varieties.) However, most Chinese varieties have reduced the number of tonal distinctions. For example, in Mandarin, the tones resulting from the split of Middle Chinese rising and departing tones merged, leaving four tones. Furthermore, final stop consonants disappeared in most Mandarin varieties, and such syllables were distributed amongst the four remaining tones in a manner that is only partially predictable. In Wu, voiced obstruents were retained, and the tone split never became phonemic: the higher-pitched allophones occur with initial voiceless consonants, and the lower-pitched allophones occur with initial voiced consonants. (Traditional Chinese classification nonetheless counts these as different tones.) Most Wu varieties retain the tone categories of Middle Chinese, but in Shanghainese several of these have merged. Many Chinese varieties exhibit tone sandhi, in which the realization of the tone of a syllable is affected by the tones of adjacent syllables in a compound word or phrase. For example, in Standard Chinese a third tone changes to a second tone when followed by another third tone. Particularly complex sandhi patterns are found in Wu varieties and coastal Min varieties. 
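The Standard Chinese third-tone rule described above lends itself to a compact sketch. The following code is not part of the source article; the digit encoding and the function name are assumptions, and it deliberately simplifies real sandhi, which also depends on prosodic grouping:

```python
# Illustrative sketch only: tones encoded as digits 1-4, one per syllable.
# It applies the basic Standard Chinese rule (a third tone becomes a second
# tone before another third tone) in a single left-to-right pass.

def third_tone_sandhi(tones):
    """Return surface tones after applying the third-tone rule."""
    result = list(tones)
    for i in range(len(result) - 1):
        if result[i] == 3 and result[i + 1] == 3:
            result[i] = 2  # underlying third tone surfaces as second tone
    return result

print(third_tone_sandhi([3, 3]))     # e.g. nǐ hǎo 你好 pronounced ní hǎo
print(third_tone_sandhi([1, 3, 3]))  # only the middle syllable changes
```

A left-to-right pass yields 2-2-3 for a run of three third tones, one common realization; in actual speech the outcome depends on how the phrase is grouped.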
In northern varieties, many particles or suffixes are weakly stressed or atonic syllables. These are much rarer in southern varieties. Such syllables have a reduced pitch range that is determined by the preceding syllable. Vocabulary Most morphemes in Chinese varieties are monosyllables descended from Old Chinese words, and have cognates in all varieties: All varieties use cognates of lái (来) and qù (去) for 'come' and 'go' respectively. The verbs zǒu (走) and xíng (行) originally meant 'run' and 'walk' respectively, senses that are retained in the south. In the north, the former has shifted to become the main word for 'walk', and a new verb pǎo (跑) is used for 'run'. Cognates of the old verb for 'stand', lì (立), are now limited to scattered areas of northern and central China, having been replaced in the north by cognates of zhàn (站) and in the south by cognates of jì (踦). Southern varieties also include distinctive substrata of vocabulary of non-Chinese origin. Some of these words may have come from Tai–Kadai and Austroasiatic languages. Grammar Chinese varieties generally lack inflectional morphology and instead express grammatical categories using analytic means such as particles and prepositions. There are major differences between northern and southern varieties, but often some northern areas share features found in the south, and vice versa. The usual unmarked word order in Chinese varieties is subject–verb–object, with other orders used for emphasis or contrast. Modifiers usually precede the word they modify, so that adjectives precede nouns. Instances in which the modifier follows the head are mainly found in the south, and are attributed to substrate influences from languages formerly dominant in the area, especially Kra–Dai languages. Nouns in Chinese varieties are generally not marked for number. 
As in languages of the Mainland Southeast Asia linguistic area, Chinese varieties require an intervening classifier when a noun is preceded by a demonstrative or numeral. The inventory of classifiers tends to be larger in the south than in the north, where some varieties use only the general classifier cognate with ge 个/個. Most Chinese varieties form the diminutive, often indicating familiarity or even becoming a marker of nouns, by suffixing a reduced form of a word for 'son, child', typically cognates of ér 儿/兒 or zǐ 子. Reduction of the former to rhotacization is a pervasive feature of Beijing speech, partially incorporated into Standard Chinese. Min varieties use a distinctive morpheme derived from Proto-Min *kiɑnB 囝 'son'. In Taiwanese Hokkien, this suffix has been reduced to -á through a process that can be traced through 17th-century accounts of Hokkien. In several southern and central varieties, the original diminutive suffix is unclear, as it has been reduced to a final glottal stop, glottal constriction, or a pattern of tonal change. First- and second-person pronouns are cognate across all varieties. For third-person pronouns, Jin, Mandarin, and Xiang varieties have cognate forms, but other varieties generally use forms that originally had a velar or glottal initial. Plural personal pronouns may be marked with a suffix, noun or phrase in different varieties. The suffix men 们/們 is common in the north, but several different suffixes are used elsewhere. In some varieties, especially in the Wu area, different suffixes are used for first, second and third person pronouns. Case is not marked, except in varieties in the Qinghai–Gansu sprachbund. The forms of demonstratives vary greatly, with few cognates between different areas. A two-way distinction between proximal and distal is most common, but some varieties have a single neutral demonstrative, while others distinguish three or more on the basis of distance, visibility or other properties.
An extreme example is found in a variety spoken in Yongxin County, Jiangxi, where five grades of distance are distinguished. Attributive constructions typically have the form NP/VP + ATTR + NP, where the last noun phrase is the head and the attributive marker is usually a cognate of de 的 in the north or a classifier in the south. The latter pattern is also common in the languages of Southeast Asia. A few varieties in the Jiang–Huai, Wu, southern Min and Yue areas feature the old southern pattern of a zero attributive marker. Nominalization of verb phrases or predicates is achieved by following them with a marker, usually the same as the attributive marker, though some varieties use a different marker. All varieties have transitive and intransitive verbs. Instead of adjectives, Chinese varieties use stative verbs, which can function as predicates but differ from intransitive verbs in being modifiable by degree adverbs. Ditransitive sentences vary, with northern varieties placing the indirect object before the direct object and southern varieties using the reverse order:
我 给/給 你 一 本 书/書。 Wǒ gěi nǐ yī běn shū. (1SG give 2SG one CL book) 'I give you a book.'
我 分 本 书/書 分 你。 ŋai˩˨ pun˦ pun˧˩ su˦ pun˦ n˩˨ (1SG give CL book give 2SG) 'I give you a book.'
我 把 本 书/書 你。 ŋɔ pa pən sɿ ɲɪ (1SG give CL book 2SG) 'I give you a book.'
All varieties have copular sentences of the form NP1 + COP + NP2, though the copula varies. Most Yue and Hakka varieties use a form cognate with xì 係 'to connect'. All other varieties use a form cognate with shì 是, which was a demonstrative in Classical Chinese but began to be used as a copula from the Han period. All varieties form existential sentences with a verb cognate with yǒu 有, which can also be used as a transitive verb indicating possession.
Most varieties use a locative verb cognate to zài 在, but Min, Wu and Yue varieties use several different forms. All varieties allow sentences of the form NP + VP1 + COMP + VP2, with a verbal complement VP2 containing a stative verb describing the manner or extent of the main verb. In northern varieties, the marker is a cognate of de 得, but many southern varieties distinguish between manner and extent complements using different markers. Standard Chinese does not allow an object to co-occur with a verbal complement, but other varieties permit an object between the marker and the complement. A characteristic feature of Chinese varieties is in-situ questions; other question forms are also common. A sentence is negated by placing a marker before the verb. Old Chinese had two families of negation markers, starting with *p- and *m- respectively. Northern and Central varieties tend to use a word from the first family, cognate with Beijing bù 不, as the ordinary negator. A word from the second family is used as an existential negator 'have not', as in Beijing méi 沒 and Shanghai m2. In Mandarin varieties this word is also used for 'not yet', whereas in Wu and other groups a different form is typically used. In Southern varieties, negators tend to come from the second family. The ordinary negators in these varieties are all derived from a syllabic nasal *m̩, though it has a level tone in Hakka and Yue and a rising tone in Min. Existential negators derive from a proto-form *mau, though again the tonal category varies between groups. Many Chinese varieties allow a modal auxiliary verb before the main verb, connoting a modality of possibility, necessity or volition. These auxiliaries are derived from grammaticalized verbs, and vary between varieties. Chinese varieties generally indicate the roles of nouns with respect to verbs using prepositions derived from grammaticalized verbs.
Varieties differ in the set of prepositions used, with northern varieties tending to use a substantially larger inventory, including disyllabic and trisyllabic prepositions. In northern varieties, the preposition bǎ 把 may be used to move the object before the verb (the "disposal" construction). Similar structures using several different prepositions are used in the south, but tend to be avoided in more colloquial speech. Chinese varieties typically have prepositions to mark the agent of the verb, in which case the subject is the patient. There are many forms of this marker, derived from different verbs. This construction differs from the passive voice of most languages in that the agent may not be omitted and the construction implies that the action has an adverse effect on the patient. In more formal speech, the standard language uses bèi 被, after which the agent may be omitted. It is also used in non-adverse situations, especially in writing, due to the influence of European languages. Comparative constructions are expressed with a prepositional phrase before the stative verb in most northern and central varieties, as well as Northern Min and Hakka, while other southern varieties retain the older form in which the prepositional phrase follows the stative verb. The preposition is usually bǐ 比 in the north, with other forms used elsewhere. Some Southern Min varieties use an adverbial comparative:
他 比 我 高。 tā bǐ wǒ gāo. (3SG CMP 1SG tall) 'He/she is taller than me.'
佢 高 过/過 我。 kʰøy˨˦ kow˥ kwɔ˦ ŋɔ˨˦. (3SG tall CMP 1SG) 'He/she is taller than me.'
伊 較 躼 我。 i˥ kʰaʔ˧ lo˥˩ gua˥˩. (3SG more tall 1SG) 'He/she is taller than me.'
Chinese varieties tend to indicate aspect using markers following the main verb.
The markers, usually derived from verbs, vary widely in both their forms and their degree of grammaticalization, from independent verbs, through complements, to bound suffixes. Southern varieties tend to have richer aspect systems, making more distinctions than northern ones. Sociolinguistics Within mainland China, there has been a persistent drive to promote Putonghua; for instance, the education system is entirely Mandarin-medium from the second year onward. However, use of local varieties is tolerated and socially preferred in many informal situations. In Hong Kong, written Cantonese is not used in formal documents, and within the PRC a character set closer to Mandarin tends to be used. At the national level, differences in dialect generally do not correspond to political divisions or categories, and this has for the most part prevented dialect from becoming the basis of identity politics. Historically, many of the people who promoted Chinese nationalism were from southern China and did not natively speak Mandarin, and even leaders from northern China rarely spoke with the standard accent. For example, Mao Zedong's speech often emphasized his Hunan origins, rendering much of what he said incomprehensible to many Chinese. Chiang Kai-shek and Sun Yat-sen were also from southern China, and their conventional English names reflect Cantonese pronunciations of their given names, differing from the pinyin spellings Jiǎng Jièshí and Sūn Yìxiān. One consequence of this is that China does not have a well-developed tradition of spoken political rhetoric, and most Chinese political works are intended primarily as written works rather than spoken ones. Another factor that limits the political implications of dialect is that it is very common within an extended family for different people to know and use different varieties.
Before 1945, most of the population of Taiwan were Han Chinese, some of whom spoke Japanese, in addition to Taiwanese Hokkien or Hakka, with a minority of Taiwanese aborigines, who spoke Formosan languages. When the Kuomintang retreated to the island after losing the Chinese Civil War in 1949, they brought a substantial influx of speakers of Northern Chinese (and other varieties from across China), and viewed the use of Mandarin as part of their claim to be a legitimate government of the whole of China. Education policy promoted the use of Mandarin over the local languages, and was implemented especially rigidly in elementary schools, with punishments and public humiliation for children using other languages at school. From the 1970s, the government promoted adult education in Mandarin, required Mandarin for official purposes, and encouraged its increased use in broadcasting. Over a 40-year period, these policies succeeded in spreading the use and prestige of Mandarin through society at the expense of the other languages. They also aggravated social divisions, as Mandarin speakers found it difficult to find jobs in private companies but were favored for government positions. From the 1990s, Taiwanese languages (Taiwanese Hokkien, Taiwanese Hakka and the Formosan languages) were offered in elementary and middle schools, first in Yilan county, then in other areas governed by elected Democratic Progressive Party (DPP) politicians, and finally throughout the island. When Singapore became independent in 1965, the island nation was extremely linguistically diverse. Over 75% of the population were ethnically Chinese. In the 1957 census, they gave their native languages as Hokkien (39.8%), Teochew (22.6%), Cantonese (20%), Hainanese (6.8%), Hakka (6.1%) and six other Chinese varieties (4.7%). The official languages of the new state were defined as English, Standard Chinese (locally called Mandarin or Huáyǔ 华语), Malay and Tamil. 
In 1966, the Singaporean government implemented a bilingual education policy, under which students learned both English and the official language identified with their ethnicity. For Chinese Singaporeans this was Mandarin, even though hardly any of them spoke it natively. The government argued that Mandarin was more economically valuable, and that speaking Mandarin would help Chinese Singaporeans retain their heritage, as Mandarin contains a cultural repository of values and traditions that are identifiable to all Chinese, regardless of dialect group. The Goh Report, an evaluation of Singapore's education system by Goh Keng Swee in 1976, found that the bilingual policy was failing, with less than 40% of the student population managing to attain minimum levels of competence in two languages. The home use of non-Mandarin varieties was identified as the major cause of this failure. Lee Kuan Yew, then Prime Minister, argued that the culture, prosperity and security of the nation were at risk. The government response was a massive Speak Mandarin Campaign, launched by Lee Kuan Yew in 1979, aiming to discourage use of other Chinese varieties. The stated motivations were to make the bilingual education policy effective, to promote the cultural cohesion of Chinese Singaporeans, and to provide them with a common language that was neutral between existing varieties and would also be useful outside Singapore. In 1980, the government announced that all Chinese students would be required to register under pinyin spellings of their names rather than spellings based on other varieties, but a majority of parents refused to comply, and the policy was abandoned in 1991. Nevertheless, the relentless official pressure and denigration of "dialects", together with the rise of Standard Chinese in China, had the desired effect. The proportion of Chinese homes mostly using non-Mandarin varieties had fallen from more than 80% to less than 20% by 2010, with similar declines in the three main varieties (Hakka, Min, and Yue).
From 1987, all education has been in English, with the designated other official language taught as a second language. Since 1994, the Campaign has tended to focus on promoting Mandarin to the increasingly numerous English-speaking Chinese Singaporeans. In southern China (not including Hong Kong and Macau), where the difference between Standard Chinese and local varieties is particularly pronounced, well-educated Chinese are generally fluent in Standard Chinese, and most people have at least a good passive knowledge of it, in addition to being native speakers of the local variety. The choice varies based on the social situation. Standard Chinese is usually considered more formal and is required when speaking to a person who does not understand the local variety. The local idiom (be it non-Standard Chinese or non-Mandarin altogether) is generally considered more intimate and is used among close family members and friends and in everyday conversation within the local area. Chinese speakers will frequently code-switch between Standard Chinese and the local variety. Parents will generally speak to their children in the local variety, and the relationship between it and Mandarin appears to be mostly stable, even diglossic. Local varieties are valued as symbols of regional cultures. People are generally tied to their hometown, and therefore to its local variety, rather than to a broad linguistic classification. For example, a person from Wuxi may claim to speak Wuxi dialect, even though it is similar to Shanghainese (another Wu dialect). Likewise, a person from Xiaogan may claim to speak Xiaogan dialect. Linguistically, Xiaogan dialect belongs to the Mandarin group, but its pronunciation and diction are quite different from spoken Standard Chinese. Knowing the local variety is of considerable social benefit, and most Chinese who permanently move to a new area will attempt to pick up the local variety.
Learning a new variety is usually done informally through a process of immersion and recognizing sound shifts. Generally the differences are more pronounced lexically than grammatically. Typically, a speaker of one Chinese variety will need about a year of immersion to understand the local variety and about three to five years to become fluent in speaking it. Because of the range of varieties spoken, there are usually few formal methods for learning a local variety. Due to the variety in Chinese speech, Mandarin speakers from each area of China are very often prone to fuse or "translate" words from their local language into their Mandarin conversations. In addition, each area of China has its recognizable accents while speaking Mandarin. Generally, the nationalized standard form of Mandarin pronunciation is only heard on news and radio broadcasts. Even in the streets of Beijing, the flavor of Mandarin varies in pronunciation from the Mandarin heard on the media. In Taiwan, as most people at least understand, if not speak, Taiwanese Hokkien, Taiwanese Mandarin has acquired many loanwords from Hokkien. Some of these are directly implanted into Mandarin, as in the case of "蚵仔煎," "oyster omelet", which most Taiwanese people would call by its Hokkien name (ô-á-tsian) rather than its Mandarin one (é-zǐ-jiān). In other cases, Mandarin creates new words to imitate Hokkien ones, as in the case of "哇靠," ("damn") from the Hokkien "goák-hàu", but pronounced in Mandarin as "wā-kào". While most Taiwanese people understand Hokkien, far fewer understand or speak Hakka, and Taiwanese Mandarin therefore contains fewer Hakka loanwords. Formerly, there was a misconception among linguists that language in Taiwan was primarily tied to ethnicity (i.e., Minnan people speak Hokkien, Hakka people speak Hakka, and Indigenous people speak the language corresponding to their tribe). 
Researchers later realized that, although it is true that most Minnan people speak Hokkien, most Hakka people also speak Hokkien and many of them do not speak Hakka. Hokkien remains the most prestigious language other than Mandarin and English in Taiwan. Although Hokkien, Hakka, and Taiwan's many Indigenous languages have now been elevated to the status of national languages, it is notable that there is a clear de facto gradient of valuation of these languages. For instance, on the Taipei metro when nearing the next station, a passenger will hear these languages in the following order: Mandarin, English, Hokkien, Hakka, sometimes followed by Japanese and Korean. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://github.com/sponsors] | [TOKENS: 466] |
Support the developers who power open source. GitHub Sponsors allows the developer community to financially support the open source projects they depend on, directly on GitHub. Launch a page in minutes and showcase Sponsors buttons on your GitHub profile and repositories! You will automatically appear in worldwide recommendations to partners who are eager to invest in their open source dependencies. Find and fund your dependencies in a single transaction. The discovery and bulk sponsorship tools are easy to use. Flexible payment enables you to choose when and how you pay. Corporations like Microsoft like it because all their payments go out as a single invoice. Everyone should be able to contribute to open source. Making open source a viable, financially rewarding career path helps contribute to our digital infrastructure. More funding, more projects, more software to power our world. Web-Check provides real-time monitoring for uptime, speed, and user experience, trusted by businesses like Amazon, Shopify, and Airbnb. OpenWebUI simplifies the development of interactive GenAI apps using LLMs, used by developers at companies like Microsoft, IBM, and GitHub. cURL is included in almost every modern device: smartphones, cars, TVs, laptops, servers, consoles, printers, and beyond. Sign in and start by navigating to your dependencies, your explore tab, trending repositories, or collections. When a repository has the Sponsor graphic, you can sponsor them directly.
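The Sponsor button on a repository is driven by a funding file committed to the repository itself. A minimal sketch of GitHub's documented `.github/FUNDING.yml` format (all usernames and URLs below are placeholders):

```yaml
# .github/FUNDING.yml: enables the "Sponsor" button on a repository.
# Usernames and URLs below are placeholders.
github: [octocat]                        # up to four GitHub Sponsors accounts
patreon: example-maintainer              # one username per external platform
custom: ["https://example.com/donate"]   # arbitrary funding page URLs
```

Once this file is merged to the default branch, GitHub renders the Sponsor button on the repository page (sponsorships may also need to be enabled in the repository's settings).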
You can become a sponsored developer by joining GitHub Sponsors, completing your sponsored developer profile, creating sponsorship tiers, submitting your bank and tax information, and enabling two-factor authentication for your account on GitHub.com. Learn more about getting paid for contributions Yes. Your tax information must be on file in order for GitHub to make payments to your bank account. The required tax documents may vary based on your location. Note that we do our best to help you with the Sponsors program, but we’re unable to provide tax guidance. Learn more about tax information for GitHub Sponsors
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-androidlaunch-68] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since developed a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion in OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board.
Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".
The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. 
According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. 
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. 
The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation. The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. 
Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. 
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. This was an increase from $3.7 billion in 2024, which was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company’s cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion. 
The deal made OpenAI the world's most valuable privately owned company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when its board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. 
Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities. OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. 
A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other costs, such as infrastructure expenses, quality assurance, and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. That same month, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal had not been finalized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion-dollar deal with AMD, committing to purchase six gigawatts' worth of AMD chips, starting with the MI450.
OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance, and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars, and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army, joining Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, gaining attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets.
GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365, and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model internally codenamed Strawberry.
Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning. In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said is better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing.
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift away from openness. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was growing riskier, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, meant that when users accidentally enabled it while sharing links, some discussions appeared in search results, including ones containing personal details such as names, locations, and intimate topics. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks.
CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Superalignment co-leader Jan Leike also departed, citing concerns over safety and trust. OpenAI subsequently signed content deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman, and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems, by contrast, should not be overly regulated.
They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914; such demands are typically preliminary and nonpublic, but the FTC's document was leaked. The investigation concerned allegations that the company had scraped public data and published false and defamatory information. The FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about 'circular' spending arrangements (for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent) and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage.
Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation, arguing that the state bill encroaches on matters better regulated at the federal level. Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay, and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult, and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books.
In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed a lawsuit against OpenAI alleging copyright infringement. The lawsuit is said to have charted a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker. It was filed in San Francisco, California, by sixteen anonymous plaintiffs, who also claimed that OpenAI and its partner and customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform.
Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press, and CBC, sued OpenAI for using their news articles to train its software without permission. In an October 2024 New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN, and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied.
Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be disclosed. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In 2025, Stein-Erik Soelberg, 56, allegedly murdered his mother, Suzanne Adams; in the months prior, the paranoid and delusional man had often discussed his ideas with ChatGPT. In December 2025, Adams's estate sued OpenAI, claiming that the company shared responsibility because of the risk of chatbot-induced psychosis, although "chatbot psychosis" is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Democratic_Party_(United_States)]
Democratic Party (United States) The Democratic Party is the major liberal political party in the United States. Sitting on the center to center-left of the political spectrum, it is the world's oldest active political party, having been founded in 1828. Its main rival is the Republican Party, and since the 1850s both have dominated American politics. The Democratic Party initially supported Jacksonian democracy, agrarianism, and geographical expansionism, while opposing a national bank and high tariffs. Democrats won six of the eight presidential elections from 1828 to 1856, losing twice to the Whigs. In 1860, the party split into Northern and Southern factions over slavery. The party remained dominated by agrarian interests, contrasting with Republican support for the big business of the Gilded Age. Democratic candidates won the presidency only twice[b] between 1860 and 1908, although they won the popular vote two more times in that period. During the Progressive Era, some factions of the party supported progressive reforms, with Woodrow Wilson being elected president in 1912 and 1916. In 1932, Franklin D. Roosevelt was elected president after campaigning on a strong response to the Great Depression. His New Deal programs created a broad Democratic coalition that united white Southerners, Northern workers, labor unions, African Americans, Catholic and Jewish communities, progressives, and liberals. From the late 1930s, a conservative minority in the party's Southern wing joined with Republicans to slow and stop further progressive domestic reforms. After the civil rights movement and Great Society era of progressive legislation under Lyndon B. Johnson, who was often able to overcome the conservative coalition in the 1960s, many white Southerners switched to the Republican Party as the Northeastern states became more reliably Democratic.
The party's labor union element has weakened since the 1970s amid deindustrialization, and during the 1980s it lost many white working-class voters to the Republicans under Ronald Reagan. The election of Bill Clinton in 1992 marked a shift for the party toward centrism and the Third Way, moving its economic stance toward market-based policies. Barack Obama oversaw the party's passage of the Affordable Care Act in 2010. In the 21st century, the Democratic Party's strongest demographics are urban voters, college graduates (especially those with graduate degrees), African Americans, women, younger voters, irreligious voters, the unmarried, and LGBTQ people. On social issues, it advocates for abortion rights, gun control, LGBTQ rights, action on climate change, and the legalization of marijuana. On economic issues, the party favors healthcare reform, paid sick leave, paid family leave, and support for unions. In foreign policy, the party supports liberal internationalism and aid to Ukraine, as well as tougher stances against China and Russia. History Democratic Party officials often trace its origins to the Democratic-Republican Party, founded by Thomas Jefferson, James Madison and other influential opponents of the conservative Federalists in 1792. That party died out before the modern Democratic Party was organized; the Jeffersonian party also inspired the Whigs and modern Republicans. Historians argue that the modern Democratic Party was first organized in the late 1820s with the election of war hero Andrew Jackson of Tennessee, making it the world's oldest active political party. It was predominantly built by Martin Van Buren, who assembled a wide cadre of politicians in every state behind Jackson. Since the nomination of William Jennings Bryan in 1896, the party has generally positioned itself to the left of the Republican Party on economic issues.
Democrats have been more liberal on civil rights since 1948, although conservative factions within the Democratic Party that opposed them persisted in the South until the 1960s. On foreign policy, both parties have changed positions several times. The Democratic Party evolved from the Jeffersonian Republican or Democratic-Republican Party organized by Jefferson and Madison in opposition to the Federalist Party. The Democratic-Republican Party favored republicanism, a weak federal government, states' rights, agrarian interests (especially Southern planters), and strict adherence to the Constitution. The party opposed a national bank and Great Britain. After the War of 1812, the Federalists virtually disappeared and the only national political party left was the Democratic-Republicans, which was prone to splinter along regional lines. The era of one-party rule in the United States, known as the Era of Good Feelings, lasted from 1816 until 1828, when Andrew Jackson became president. Jackson and Martin Van Buren worked with allies in each state to form a new Democratic Party on a national basis. In the 1830s, the Whig Party coalesced into the main rival to the Democrats. When exactly the Democratic Party formed is still debated among historians. Many put forth 1828, when a federal structure for the various Jacksonian movements was created, as the foundation date; however, it could also be argued that the party dates to the founding of the earliest Jacksonian groups. In that case, the Democratic Party would have been formed on December 23, 1823, when the Greensburg Committee read the Greensburg Resolution outside the Westmoreland County Courthouse in Greensburg, Pennsylvania. The committee consisted of five of Greensburg's most prominent political figures, the brothers Jacob M. Wise (state senator), John H. Wise (state representative and brigadier general), and Frederick A.
Wise (owner and editor of the Westmoreland Republican), alongside David Marchand (state representative), and James Clarke (state representative). The Greensburg Resolution was the first published call for Jackson to run for president, and the committee was the first overtly "Jacksonian" organization, dubbed the 'origin' of the Jackson movement that became the Democratic Party. The event that transformed the Jacksonians from just another faction of the Democratic-Republican Party into a divergent political force was the so-called "corrupt bargain" of 1824: although Jackson won the most popular and electoral votes, the House of Representatives did not confirm him as president; instead, Henry Clay, who was both a candidate and the Speaker of the House, whipped his supporters in Congress to vote for the runner-up, John Quincy Adams, in exchange for Adams naming Clay Secretary of State. Jackson and his followers began to coalesce more seriously into a structured party for the next election in 1828. Before 1860, the Democratic Party supported expansive presidential power, the interests of slave states, agrarianism, and expansionism, while opposing a national bank and high tariffs. The Democratic-Republican Party split over the choice of a successor to President James Monroe. The faction that supported many of the old Jeffersonian principles, led by Andrew Jackson and Martin Van Buren, became the modern Democratic Party. Historian Mary Beth Norton explains the transformation in 1828: Jacksonians believed the people's will had finally prevailed. Through a lavishly financed coalition of state parties, political leaders, and newspaper editors, a popular movement had elected the president. The Democrats became the nation's first well-organized national party ... and tight party organization became the hallmark of nineteenth-century American politics.
Behind the platforms issued by state and national parties stood a widely shared political outlook that characterized the Democrats: The Democrats represented a wide range of views but shared a fundamental commitment to the Jeffersonian concept of an agrarian society. They viewed the central government as the enemy of individual liberty. The 1824 "corrupt bargain" had strengthened their suspicion of Washington politics. ... Jacksonians feared the concentration of economic and political power. They believed that government intervention in the economy benefited special-interest groups and created corporate monopolies that favored the rich. They sought to restore the independence of the individual—the artisan and the ordinary farmer—by ending federal support of banks and corporations and restricting the use of paper currency, which they distrusted. Their definition of the proper role of government tended to be negative, and Jackson's political power was largely expressed in negative acts. He exercised the veto more than all previous presidents combined. ... Nor did Jackson share reformers' humanitarian concerns. He had no sympathy for American Indians, initiating the removal of the Cherokees along the Trail of Tears. Opposing factions led by Henry Clay helped form the Whig Party. The Democratic Party had a small yet decisive advantage over the Whigs until the 1850s, when the Whigs fell apart over the issue of slavery. In 1854, angered by the Kansas–Nebraska Act, anti-slavery Democrats left the party and joined Northern Whigs to form the Republican Party. Martin Van Buren also helped found the Free Soil Party to oppose the spread of slavery, running as its candidate in the 1848 presidential election, before returning to the Democratic Party and staying loyal to the Union.
The radical pro-slavery Fire-Eaters led walkouts at the two conventions when the delegates would not adopt a resolution supporting the extension of slavery into territories even if the voters of those territories did not want it. These Southern Democrats nominated the pro-slavery incumbent vice president, John C. Breckinridge of Kentucky, for president and General Joseph Lane, of Oregon, for vice president. The Northern Democrats nominated Senator Stephen A. Douglas of Illinois for president and former Georgia Governor Herschel V. Johnson for vice president. This fracturing of the Democrats led to a Republican victory and Abraham Lincoln was elected the 16th president of the United States. As the American Civil War broke out, Northern Democrats were divided into War Democrats and Peace Democrats. The Confederate States of America deliberately avoided organized political parties. Most War Democrats rallied to Republican president Abraham Lincoln and the Republicans' National Union Party in the election of 1864, which featured Andrew Johnson on the Union ticket to attract fellow Democrats. Johnson replaced Lincoln in 1865, but he stayed independent of both parties. The Democrats benefited from white Southerners' resentment of Reconstruction after the war and consequent hostility to the Republican Party. After Redeemers ended Reconstruction in the 1870s, and following the often extremely violent disenfranchisement of African Americans led by such white supremacist Democratic politicians as Benjamin Tillman of South Carolina in the 1880s and 1890s, the South, voting Democratic, became known as the "Solid South". Although Republicans won all but two presidential elections, the Democrats remained competitive. The party was dominated by pro-business Bourbon Democrats led by Samuel J. 
Tilden and Grover Cleveland, who represented mercantile, banking, and railroad interests; opposed imperialism and overseas expansion; fought for the gold standard; opposed bimetallism; and crusaded against corruption, high taxes, and tariffs. Cleveland was elected to non-consecutive presidential terms in 1884 and 1892. Agrarian Democrats demanding free silver, drawing on Populist ideas, overthrew the Bourbon Democrats in 1896 and nominated William Jennings Bryan for the presidency (a nomination repeated by Democrats in 1900 and 1908). Bryan waged a vigorous campaign attacking Eastern moneyed interests, but he lost to Republican William McKinley. The Democrats took control of the House in 1910, and Woodrow Wilson won election as president in 1912 (when the Republicans split) and 1916. Wilson effectively led Congress to put to rest the issues of tariffs, money, and antitrust, which had dominated politics for 40 years, with new progressive laws. He failed to secure Senate passage of the Versailles Treaty (ending the war with Germany and joining the League of Nations). The weakened party was deeply divided by issues such as the Ku Klux Klan and Prohibition in the 1920s. However, it did organize new ethnic voters in Northern cities. After World War I ended and continuing through the Great Depression, the Democratic and Republican Parties both largely believed in American exceptionalism over the European monarchies and state socialism that existed elsewhere in the world. The Great Depression, which began in 1929 under Republican president Herbert Hoover and a Republican Congress, set the stage for a more liberal government: the Democrats controlled the House of Representatives nearly uninterrupted from 1930 until 1994, held the Senate for 44 of the 48 years after 1930, and won most presidential elections until 1968. Franklin D. Roosevelt, elected to the presidency in 1932, introduced a set of federal government programs called the New Deal. 
New Deal liberalism meant the regulation of business (especially finance and banking) and the promotion of labor unions as well as federal spending to aid the unemployed, help distressed farmers, and undertake large-scale public works projects. It marked the start of the American welfare state. The opponents, who stressed opposition to unions, support for business and low taxes, started calling themselves "conservatives". Until the 1980s, the Democratic Party was a coalition of two parties divided by the Mason–Dixon line: liberal Democrats in the North and culturally conservative voters in the South, who, though benefiting from many of the New Deal public-works projects, opposed increasing civil rights initiatives advocated by Northeastern liberals. The polarization grew stronger after Roosevelt died. Southern Democrats formed a key part of the bipartisan conservative coalition in an alliance with most of the Midwestern Republicans. The economically activist philosophy of Franklin D. Roosevelt, which has strongly influenced American liberalism, shaped much of the party's economic agenda after 1932. From the 1930s to the mid-1960s, the liberal New Deal coalition usually controlled the presidency while the conservative coalition usually controlled Congress. Issues facing parties and the United States after World War II included the Cold War and the civil rights movement. Republicans attracted conservatives and, after the 1960s, white Southerners from the Democratic coalition with their use of the Southern strategy and resistance to New Deal and Great Society liberalism. Until the 1950s, African Americans had traditionally supported the Republican Party because of its anti-slavery civil rights policies. Following the passage of the Civil Rights Act of 1964 and Voting Rights Act of 1965, the Southern states became more reliably Republican in presidential politics, while Northeastern states became more reliably Democratic. 
Studies show that Southern whites, who were a core constituency in the Democratic Party, shifted to the Republican Party due to racial backlash and social conservatism. The election of President John F. Kennedy from Massachusetts in 1960 partially reflected this shift. In the campaign, Kennedy attracted a new generation of younger voters. In his agenda dubbed the New Frontier, Kennedy introduced a host of social programs and public works projects, along with enhanced support of the space program, proposing a crewed spacecraft trip to the moon by the end of the decade. He pushed for civil rights initiatives and proposed the Civil Rights Act of 1964, but with his assassination in November 1963, he was not able to see its passage. Kennedy's successor Lyndon B. Johnson was able to persuade the largely conservative Congress to pass the Civil Rights Act of 1964, and with a more progressive Congress in 1965 passed much of the Great Society, including Medicare and Medicaid, which consisted of an array of social programs designed to help the poor, sick, and elderly. Kennedy and Johnson's advocacy of civil rights further solidified black support for the Democrats but had the effect of alienating Southern whites who would eventually gravitate toward the Republican Party, particularly after the election of Ronald Reagan to the presidency in 1980. Many conservative Southern Democrats defected to the Republican Party, beginning with the passage of the Civil Rights Act of 1964 and the general leftward shift of the party. The United States' involvement in the Vietnam War in the 1960s was another divisive issue that further fractured the fault lines of the Democrats' coalition. 
After the Gulf of Tonkin Resolution in 1964, President Johnson committed a large contingent of combat troops to Vietnam, but the escalation failed to drive the Viet Cong from South Vietnam, resulting in an increasing quagmire, which by 1968 had become the subject of widespread anti-war protests in the United States and elsewhere. With increasing casualties and nightly news reports bringing home troubling images from Vietnam, the costly military engagement became increasingly unpopular, alienating many of the kinds of young voters that the Democrats had attracted in the early 1960s. The protests that year, along with the assassinations of Martin Luther King Jr. and Democratic presidential candidate Senator Robert F. Kennedy (younger brother of John F. Kennedy), climaxed in turbulence at the hotly contested Democratic National Convention that summer in Chicago, which, amid turmoil inside and outside the convention hall, nominated Vice President Hubert Humphrey. These events marked a significant turning point in the decline of the Democratic Party's broad coalition. Republican presidential nominee Richard Nixon was able to capitalize on the Democrats' disarray that year and won the 1968 election to become the 37th president. He won re-election in a landslide in 1972 against Democratic nominee George McGovern, who, like Robert F. Kennedy, reached out to the younger anti-war and counterculture voters but, unlike Kennedy, was not able to appeal to the party's more traditional white working-class constituencies. During Nixon's second term, his presidency was rocked by the Watergate scandal, which forced him to resign in 1974. He was succeeded by vice president Gerald Ford, who served a brief tenure. Watergate offered the Democrats an opportunity to recoup, and their nominee Jimmy Carter won the 1976 presidential election. 
With the initial support of evangelical Christian voters in the South, Carter was temporarily able to reunite the disparate factions within the party, but inflation and the Iran hostage crisis of 1979–1980 took their toll, resulting in a landslide victory for Republican presidential nominee Ronald Reagan in 1980, which shifted the political landscape in favor of the Republicans for years to come. The influx of conservative Democrats into the Republican Party is often cited as a reason for the Republican Party's shift further to the right during the late 20th century, as well as for the shift of its base from the Northeast and Midwest to the South. With the ascendancy of the Republicans under Ronald Reagan, the Democrats searched for ways to respond but were unable to succeed by running traditional candidates, such as former vice president and Democratic presidential nominee Walter Mondale and Massachusetts governor Michael Dukakis, who lost to Reagan and George H. W. Bush in the 1984 and 1988 presidential elections, respectively. Many Democrats attached their hopes to the rising star Gary Hart, who had challenged Mondale in the 1984 primaries running on a theme of "New Ideas" and who in the subsequent 1988 primaries became the de facto front-runner and virtual "shoo-in" for the Democratic presidential nomination before a sex scandal ended his campaign. The party nevertheless began to seek out a younger generation of leaders who, like Hart, had been inspired by the pragmatic idealism of John F. Kennedy. One such figure was Arkansas governor Bill Clinton, who was elected president in 1992 as the Democratic nominee. The Democratic Leadership Council was a campaign organization connected to Clinton that advocated a realignment and triangulation under the re-branded "New Democrat" label. The party adopted a synthesis of neoliberal economic policies with cultural liberalism, the voter base having shifted considerably to the right after Reagan. 
In an effort to appeal both to liberals and to fiscal conservatives, Democrats began to advocate for a balanced budget and a market economy tempered by government intervention (a mixed economy), along with a continued emphasis on social justice and affirmative action. The economic policy adopted by the Democratic Party, including that of the Clinton administration, has been referred to as the "Third Way". The Democrats lost control of Congress to the Republicans in the 1994 elections. However, Clinton was re-elected in 1996, becoming the first Democratic president since Franklin D. Roosevelt to win a second full term. In December 1998, Republicans in the House of Representatives impeached Clinton for his role in the Clinton–Lewinsky scandal, but he was acquitted by the Senate in February 1999. Clinton's vice president, Al Gore, ran to succeed him as president and won the popular vote, but he lost the 2000 election to Republican opponent George W. Bush in the Electoral College after a controversial dispute over a Florida recount was settled by the U.S. Supreme Court, which ruled 5–4 in favor of Bush. In the wake of the 2001 terrorist attacks on the World Trade Center and the Pentagon, as well as growing concern over global warming, some of the party's key issues in the early 21st century have included combating terrorism while preserving human rights, expanding access to health care, labor rights, and environmental protection. Democrats regained majority control of both the House and the Senate in the 2006 elections. Barack Obama won the Democratic Party's nomination and was elected as the first African-American president in 2008. Under the Obama presidency, the party moved forward with reforms including an economic stimulus package, the Dodd–Frank financial reform act, and, most significantly, the Affordable Care Act, which reshaped the nation's healthcare. 
In the 2010 midterm elections, the Democratic Party lost control of the House as well as its majorities in several state legislatures and governorships. The 2010 elections also marked the end of the Democratic Party's electoral dominance in the Southern United States. In the 2012 elections, President Obama was re-elected, but the party remained in the minority in the House of Representatives and lost control of the Senate in the 2014 midterm elections. After the 2016 election of Donald Trump, who lost the popular vote to Democratic nominee Hillary Clinton, the Democratic Party transitioned into the role of an opposition party and held neither the presidency nor Congress for two years. However, the party won back the House in the 2018 midterm elections under the leadership of Nancy Pelosi. Democrats were extremely critical of President Trump, particularly his policies on immigration and healthcare, as well as his response to the COVID-19 pandemic. In December 2019, Democrats in the House of Representatives impeached Trump, although he was acquitted in the Republican-controlled Senate. In November 2020, Democratic candidate Joe Biden defeated Trump to win the 2020 presidential election. He began his term with extremely narrow Democratic majorities in the U.S. House and Senate. During the Biden presidency, the party was characterized as adopting an increasingly progressive economic agenda. In 2022, Biden appointed Ketanji Brown Jackson, the first black woman to serve on the Supreme Court. However, because she replaced liberal justice Stephen Breyer, her appointment did not alter the court's 6–3 split between conservatives (the majority) and liberals. After Dobbs v. Jackson (decided June 24, 2022), which led to abortion bans in much of the country, the Democratic Party rallied behind abortion rights. In the 2022 midterm elections, Democrats dramatically outperformed historical trends and a widely anticipated red wave did not materialize. The party only narrowly lost its majority in the U.S. 
House and expanded its majority in the U.S. Senate, along with several gains at the state level. In July 2024, after a series of age and health concerns, Biden withdrew from the presidential election, becoming the first incumbent president since Lyndon B. Johnson in 1968 to withdraw from running for reelection, the first since the 19th century to withdraw after serving only one term,[c] and the only one ever to withdraw after already winning the primaries. Vice President Kamala Harris, who became Biden's replacement on the ballot after his withdrawal from the race, became the first black woman to be nominated by a major party, but she was defeated in the 2024 election by Donald Trump. Harris lost the Electoral College 312–226 (including all seven of the anticipated swing states) as well as the popular vote, becoming the first Democratic nominee to lose the popular vote since John Kerry in 2004, amid a global anti-incumbent backlash. As of 2026, Democrats hold 24 state governorships, 17 state legislatures, 16 state government trifectas, and the mayorships in the majority of the country's major cities. Three of the nine current U.S. Supreme Court justices were appointed by Democratic presidents. By registered members, the Democratic Party is the largest party in the U.S. and the third largest in the world. In total, 16 Democrats have served as president of the United States. Name and symbols The Democratic-Republican Party splintered in 1824 into the short-lived National Republican Party and the Jacksonian movement, which in 1828 became the Democratic Party. During the Jacksonian era, the term "The Democracy" was in use by the party, but the name "Democratic Party" was eventually settled upon and became the official name in 1844. Members of the party are called "Democrats" or "Dems". The most common mascot symbol for the party has been the donkey, or jackass. 
Andrew Jackson's enemies twisted his name into "jackass" as a term of ridicule, likening him to a stupid and stubborn animal. However, the Democrats liked the common-man implications and picked the image up themselves, and it persisted and evolved. Its most lasting impression came from the cartoons of Thomas Nast in Harper's Weekly, beginning in 1870. Cartoonists followed Nast and used the donkey to represent the Democrats and the elephant to represent the Republicans. In the early 20th century, the traditional symbol of the Democratic Party in Indiana, Kentucky, Oklahoma, and Ohio was the rooster, as opposed to the Republican eagle. The rooster was also adopted as an official symbol of the national Democratic Party. In 1904, the Alabama Democratic Party chose, as the logo to put on its ballots, a rooster with the motto "White supremacy – For the right." The words "White supremacy" were replaced with "Democrats" in 1966. In 1996, the Alabama Democratic Party dropped the rooster, citing racist and white supremacist connotations linked with the symbol. The rooster symbol still appears on Oklahoma, Kentucky, Indiana, and West Virginia ballots. In New York, the Democratic ballot symbol is a five-pointed star. Although both major political parties (and many minor ones) use the traditional American colors of red, white, and blue in their marketing and representations, since election night in 2000, blue has become the identifying color for the Democratic Party, while red has become the identifying color for the Republican Party. That night, for the first time, all major broadcast television networks used the same color scheme for the electoral map: blue states for Al Gore (Democratic nominee) and red states for George W. Bush (Republican nominee). Since then, the color blue has been widely used by the media to represent the party. This is contrary to common practice outside the United States, where blue is the traditional color of the right and red the color of the left. 
In 2010, the party introduced a new logo, with a blue capital "D" enclosed in a blue circle. It remained the party's primary brand mark, with minor changes to the shade of blue, for 15 years. In 2025, a new logo was introduced, which incorporates a white donkey facing right instead of left, with three blue stars in the center instead of four, on a blue background. Jefferson-Jackson Day is the annual fundraising dinner held by Democratic Party organizations across the United States. It is named after presidents Thomas Jefferson and Andrew Jackson, whom the party regards as its distinguished early leaders. The song "Happy Days Are Here Again" is the unofficial song of the Democratic Party. It was used prominently when Franklin D. Roosevelt was nominated for president at the 1932 Democratic National Convention and remains a sentimental favorite for Democrats. For example, Paul Shaffer played the theme on the Late Show with David Letterman after the Democrats won Congress in 2006. "Don't Stop" by Fleetwood Mac was adopted by Bill Clinton's presidential campaign in 1992 and has endured as a popular Democratic song. The emotionally similar song "Beautiful Day" by the band U2 has also become a favorite theme song for Democratic candidates. John Kerry used the song during his 2004 presidential campaign, and several Democratic congressional candidates used it as a celebratory tune in 2006. Aaron Copland's "Fanfare for the Common Man" is traditionally performed at the beginning of the Democratic National Convention.[citation needed] Structure The Democratic National Committee (DNC) is responsible for promoting Democratic campaign activities. While the DNC is responsible for overseeing the process of writing the Democratic Platform, the DNC is more focused on campaign and organizational strategy than on public policy. In presidential elections, it supervises the Democratic National Convention. 
The national convention is subject to the charter of the party and is the ultimate authority within the Democratic Party when it is in session, with the DNC running the party's organization at other times. Since February 1, 2025, the DNC has been chaired by Ken Martin. Each state also has a state committee, made up of elected committee members as well as ex officio committee members (usually elected officials and representatives of major constituencies), which in turn elects a chair. County, town, city, and ward committees generally are composed of individuals elected at the local level. State and local committees often coordinate campaign activities within their jurisdiction, oversee local conventions and in some cases primaries or caucuses, and may have a role in nominating candidates for elected office under state law. Rarely do they have much direct funding, but in 2005 DNC Chairman Howard Dean began a program (called the "50 State Strategy") of using DNC national funds to assist all state parties and pay for full-time professional staffers. In addition, state-level party committees operate in the territories of American Samoa, Guam, and the U.S. Virgin Islands; the commonwealths of the Northern Mariana Islands and Puerto Rico; and the District of Columbia. All but Puerto Rico's are active in nominating candidates for both presidential and territorial contests, while Puerto Rico's Democratic Party is organized only to nominate presidential candidates. The Democrats Abroad committee is organized by American voters who reside outside of U.S. territory to nominate presidential candidates. All such party committees are accorded recognition as state parties and are allowed to elect both members to the National Committee and delegates to the National Convention. The Democratic Congressional Campaign Committee (DCCC) assists party candidates in House races and is chaired by Representative Suzan DelBene of Washington. 
Similarly, the Democratic Senatorial Campaign Committee (DSCC), chaired by Senator Gary Peters of Michigan, raises funds for Senate races. The Democratic Legislative Campaign Committee (DLCC), chaired by Majority Leader of the New York State Senate Andrea Stewart-Cousins, is a smaller organization that focuses on state legislative races. The Democratic Governors Association (DGA) is an organization supporting the candidacies of Democratic gubernatorial nominees and incumbents. Likewise, the mayors of the largest cities and urban centers convene as the National Conference of Democratic Mayors. The DNC sponsors the College Democrats of America (CDA), a student-outreach organization with the goal of training and engaging a new generation of Democratic activists. Democrats Abroad is the organization for Americans living outside the United States. They work to advance the party's goals and encourage Americans living abroad to support the Democrats. The Young Democrats of America (YDA) and the High School Democrats of America (HSDA) are young adult and youth-led organizations, respectively, that attempt to draw in and mobilize young people for Democratic candidates but operate outside of the DNC. Political positions The Democratic Party is widely described in American sources as either a centrist or a center-left political party. Analysts including Harold Meyerson and William Galston note that many of its mainstream policy positions and prominent factions would be classified as centrist by international standards, particularly those of Europe, and that they are often more comparable to liberal-centrist parties (such as those associated with ALDE/Renew or the UK Liberal Democrats) than to traditional social-democratic parties. The party also contains distinct left-wing subgroups (such as the "Squad") alongside more centrist factions within its broad electoral coalition. Political scientists Robert C. Sinclair and R. 
Jeffrey Melton described the Democratic Party as "slightly to the right of the largest Canadian party, the center-left Liberal Party". The 21st-century Democratic Party is unique and differs from other parties of similar profile in its ideological orientation, in part due to its heterogeneous demographic composition. In particular, the Democratic Party's ideology derives from being supported both by racial minorities, particularly African Americans, and by white voters with high educational attainment. Its voting demographics are heavily polarized by education and race, but not by income. The Democratic Party is weakest among white voters without college degrees in the 21st century. Higher educational attainment is strongly correlated with higher income and wealth, and, among white voters, with increased ideological support for the Democratic Party's positions. Ideologically, the Democratic Party is more diverse than the Republican Party, according to data collected by Gallup. This derives in part from unique regional characteristics of the United States, particularly the Southern United States. Racial polarization is extremely high in the Southern United States, with black Southerners almost entirely voting for the Democratic Party and white Southerners almost entirely voting for the Republican Party. Also, white Southerners with college degrees are strongly Republican, unlike in most of the rest of the country. African Americans continue to have the lowest incomes of any racial group in the United States. The Democratic Party's contemporary liberalism has its origins in the Puritans of New England, with their emphasis on education and science dating back to the colonial era and the Scientific Revolution. This liberalism is older than the classical liberalism or social democracy of the 19th century. The Democratic Party's social positions derive from those of the New Left, that is, cultural liberalism. 
These include feminism, LGBTQ rights, drug policy reforms, and environmentalism. The party's platform favors a generous welfare state and a greater measure of social and economic equality. On social issues, it advocates for the continued legality of abortion, the legalization of marijuana, and LGBTQ rights. The social safety net and strong labor unions have been at the heart of Democratic economic policy since the New Deal in the 1930s. The Democratic Party's economic policy positions, as measured by votes in Congress, tend to align with those of the middle class. Democrats support a progressive tax system, higher minimum wages, equal opportunity employment, Social Security, universal health care, public education, and subsidized housing. They also support infrastructure development and clean energy investments to achieve economic development and job creation. Since the 1990s, the party has at times supported centrist economic reforms that cut the size of government and reduced market regulations. The party has generally rejected both laissez-faire economics and market socialism, instead favoring Keynesian economics within a capitalist market-based system. However, the party is not social democratic and does not base its policies on organized labor. Since the 2020s, the party has also been distancing itself from predistributive economic policies such as job guarantees, minimum wage increases, protectionism, and pro-union legislation. Democrats support a more progressive tax structure to provide more services and reduce economic inequality by ensuring that the wealthiest Americans pay more in taxes. Democrats and Republicans traditionally take differing stances on eradicating poverty. The sociologist David Brady said, "Our poverty level is the direct consequence of our weak social policies, which are a direct consequence of weak political actors". 
They oppose the cutting of social services, such as Social Security, Medicare, and Medicaid, believing such cuts to be harmful to efficiency and social justice. Democrats believe that the benefits of social services, in monetary and non-monetary terms, are a more productive labor force and a more cultured population, and that these benefits outweigh any that could be derived from lower taxes, especially on top earners, or from cuts to social services. Furthermore, Democrats see social services as essential to providing positive freedom, that is, freedom derived from economic opportunity. The Democratic-led House of Representatives reinstated the PAYGO (pay-as-you-go) budget rule at the start of the 110th Congress. The Democratic Party favors raising the minimum wage. The Fair Minimum Wage Act of 2007 was an early component of the Democrats' agenda during the 110th Congress. In 2006, the Democrats supported six state-ballot initiatives to increase the minimum wage, and all six initiatives passed. In 2017, Senate Democrats introduced the Raise the Wage Act, which would have raised the minimum wage to $15 an hour by 2024. In 2021, Democratic president Joe Biden proposed increasing the minimum wage to $15 by 2025. In many states controlled by Democrats, the state minimum wage has been increased to a rate above the federal minimum wage. Democrats call for "affordable and quality health care" and favor moving toward universal health care in a variety of forms to address rising healthcare costs. Progressive Democratic politicians favor a single-payer program or Medicare for All, while liberals prefer creating a public health insurance option. The Patient Protection and Affordable Care Act, signed into law by President Barack Obama on March 23, 2010, has been one of the most significant pushes for universal health care. As of December 2019, more than 20 million Americans had gained health insurance under the Affordable Care Act. 
Democrats favor improving public education by raising school standards and reforming the Head Start program. They also support universal preschool, expanding access to primary education, including through charter schools, and are generally opposed to school voucher programs. They call for addressing student loan debt and reforms to reduce college tuition. Other proposals have included tuition-free public universities and reform of standardized testing. Democrats have the long-term aim of having publicly funded college education with low tuition fees (like in much of Europe and Canada), which would be available to every eligible American student. Alternatively, they encourage expanding access to post-secondary education by increasing state funding for student financial aid such as Pell Grants and college tuition tax deductions. Democrats believe that the government should protect the environment and have a history of environmentalism. In more recent years, this stance has emphasized renewable energy generation as the basis for an improved economy, greater national security, and general environmental benefits. The Democratic Party is substantially more likely than the Republican Party to support environmental regulation and policies that are supportive of renewable energy. The Democratic Party also favors expansion of conservation lands and encourages open space and rail travel to relieve highway and airport congestion and improve air quality and the economy as it "believe[s] that communities, environmental interests, and the government should work together to protect resources while ensuring the vitality of local economies. Once Americans were led to believe they had to make a choice between the economy and the environment. They now know this is a false choice". The foremost environmental concern of the Democratic Party is climate change. Democrats, most notably former vice president Al Gore, have pressed for stern regulation of greenhouse gases. 
On October 15, 2007, Gore won the Nobel Peace Prize for his efforts to build greater knowledge about man-made climate change and to lay the foundations for the measures needed to counteract it. Democrats have supported increased domestic renewable energy development, including wind and solar power farms, in an effort to reduce carbon pollution. The party's platform calls for an "all of the above" energy policy including clean energy, natural gas, and domestic oil, with the goal of becoming energy independent. The party has supported higher taxes on oil companies and increased regulations on coal power plants, favoring a policy of reducing long-term reliance on fossil fuels. Additionally, the party supports stricter fuel emissions standards to prevent air pollution. During his presidency, Joe Biden enacted the Inflation Reduction Act of 2022, the largest allocation of funds for addressing climate change in the history of the United States. Like the Republican Party, the Democratic Party has taken widely varying views on international trade throughout its history. The Democratic Party has usually been more supportive of free trade than the Republican Party. The Democrats dominated the Second Party System and set low tariffs designed to pay for the government but not to protect industry. Their opponents, the Whigs, wanted high protective tariffs but usually were outvoted in Congress. Tariffs soon became a major political issue: the Whigs (1832–1852) and, after 1854, the Republicans wanted to protect their mostly Northern industries and constituents by voting for higher tariffs, while Southern Democrats, whose region had little industry but imported many goods, voted for lower tariffs. After the Second Party System ended in 1854, the Democrats lost control and the new Republican Party had its opportunity to raise rates. 
During the Third Party System, Democratic president Grover Cleveland made low tariffs the centerpiece of Democratic Party policies, arguing that high tariffs were an unnecessary and unfair tax on consumers. The South and West generally supported low tariffs, while the industrial North supported high tariffs. During the Fourth Party System, Democratic president Woodrow Wilson made a drastic lowering of tariff rates a major priority for his presidency. The 1913 Underwood Tariff cut rates, and the new revenues generated by the federal income tax made tariffs much less important in terms of economic impact and political rhetoric. During the Fifth Party System, the Reciprocal Tariff Act of 1934 was enacted during FDR's administration, marking a sharp departure from the era of protectionism in the United States. American duties on foreign products declined from an average of 46% in 1934 to 12% by 1962. After World War II, the U.S. promoted the General Agreement on Tariffs and Trade (GATT), established in 1947 during the Truman administration, to minimize tariffs and liberalize trade among all capitalist countries. In the 1990s, the Clinton administration and several prominent Democrats pushed through agreements such as the North American Free Trade Agreement (NAFTA). Barack Obama signed several free trade agreements during his presidency, while Joe Biden signed none during his and increased some tariffs on China. During Republican Donald Trump's two terms as president, the Democratic Party has been more in favor of free trade than the Republican Party. The Democratic Party remains supportive of the USMCA free trade agreement with Mexico and Canada.

The modern Democratic Party emphasizes social equality and equal opportunity. Democrats support voting rights and minority rights, including LGBT rights. Democratic president Lyndon B. Johnson signed the Civil Rights Act of 1964, which outlawed racial segregation.
Carmines and Stimson wrote that "the Democratic Party appropriated racial liberalism and assumed federal responsibility for ending racial discrimination." Ideological social elements in the party include cultural liberalism, civil libertarianism, and feminism. Some Democratic social policies are immigration reform, electoral reform, and women's reproductive rights. The Democratic Party is a staunch supporter of equal opportunity for all Americans regardless of sex, age, race, ethnicity, sexual orientation, gender identity, religion, creed, or national origin. The Democratic Party has broad appeal across most socioeconomic and ethnic demographics, as seen in recent exit polls. Democrats also strongly support the Americans with Disabilities Act to prohibit discrimination against people based on physical or mental disability. The Democrats also pushed the ADA Amendments Act of 2008, a disability rights expansion that became law. Most Democrats support affirmative action to further equal opportunity. However, in 2020, 57% of voters in California voted to keep their state constitution's ban on affirmative action, despite Biden winning 63% of the vote in California in the same election. The party is very supportive of improving "voting rights" as well as election accuracy and accessibility. They support extensions of voting time, including making election day a holiday. They support reforming the electoral system to eliminate gerrymandering, abolishing the electoral college, and passing comprehensive campaign finance reform.

The Democratic position on abortion has changed significantly over time. During the late 1960s and early 1970s, Republicans generally favored legalized abortion more than Democrats, although significant heterogeneity could be found within both parties. During this time, opposition to abortion tended to be concentrated within the political left in the United States.
Liberal Protestants and Catholics (many of whom were Democratic voters) opposed abortion, while most conservative Protestants supported legal access to abortion services. In its national platforms from 1992 to 2004, the Democratic Party called for abortion to be "safe, legal and rare"—namely, keeping it legal by rejecting laws that allow governmental interference in abortion decisions and reducing the number of abortions by promoting both knowledge of reproduction and contraception and incentives for adoption. When Congress voted on the Partial-Birth Abortion Ban Act in 2003, congressional Democrats were split, with a minority (including former Senate majority leader Harry Reid) supporting the ban and the majority of Democrats opposing the legislation. According to the 2020 Democratic Party platform, "Democrats believe every woman should be able to access high-quality reproductive health care services, including safe and legal abortion." After Roe v. Wade (1973) was overturned in Dobbs v. Jackson Women's Health Organization (2022), Democratic-controlled states and ballot initiatives were able to ensure access to abortion. The number of abortions in the United States increased after Dobbs, due to the right to travel between states.

Like the Republican Party, the Democratic Party has taken widely varying views on immigration throughout its history. Since the 1990s, the Democratic Party has been more supportive overall of immigration than the Republican Party. Many Democratic politicians have called for systematic reform of the immigration system such that residents who have come into the United States illegally have a pathway to legal citizenship. President Obama remarked in November 2013 that he felt it was "long past time to fix our broken immigration system," particularly to allow "incredibly bright young people" who came over as students to become full citizens. In 2013, Democrats in the Senate passed S.
744, which would reform immigration policy to allow citizenship for illegal immigrants in the United States. The bill failed to pass in the House and was never re-introduced after the 113th Congress. Opposition to immigration has increased in the 2020s, with a majority of Democrats supporting increasing border security. In the 2024 presidential election, Trump increased his vote share in counties along the Mexico–United States border, including in majority-Hispanic counties.

The Democratic position on LGBT rights has changed significantly over time. Before the 2000s, like the Republicans, the Democratic Party often took positions hostile to LGBT rights. As of the 2020s, both voters and elected representatives within the Democratic Party are overwhelmingly supportive of LGBT rights. Support for same-sex marriage has steadily increased among the general public, including voters in both major parties, since the start of the 21st century. An April 2009 ABC News/Washington Post public opinion poll put support among Democrats at 62%. A 2006 Pew Research Center poll of Democrats found that 55% supported gays adopting children, with 40% opposed, while 70% supported gays in the military, with only 23% opposed. Gallup polling from May 2009 stated that 82% of Democrats support open enlistment. A 2023 Gallup public opinion poll found 84% of Democrats support same-sex marriage, compared to 71% support by the general public and 49% support by Republicans. The 2004 Democratic National Platform stated that marriage should be defined at the state level and it repudiated the Federal Marriage Amendment. John Kerry, the Democratic presidential nominee in 2004, did not support same-sex marriage in his campaign.
While it did not state support for same-sex marriage, the 2008 platform called for repeal of the Defense of Marriage Act, which banned federal recognition of same-sex marriage and removed the need for interstate recognition, supported antidiscrimination laws and the extension of hate crime laws to LGBT people and opposed "don't ask, don't tell". The 2012 platform included support for same-sex marriage and for the repeal of DOMA. On May 9, 2012, Barack Obama became the first sitting president to say he supports same-sex marriage. Previously, he had opposed restrictions on same-sex marriage such as the Defense of Marriage Act, which he promised to repeal, California's Prop 8, and a constitutional amendment to ban same-sex marriage (saying that "decisions about marriage should be left to the states as they always have been"), but also stated that he personally believed marriage to be between a man and a woman and that he favored civil unions that would "give same-sex couples equal legal rights and privileges as married couples". Earlier, when running for the Illinois Senate in 1996, he said, "I favor legalizing same-sex marriages, and would fight efforts to prohibit such marriages". Former presidents Bill Clinton and Jimmy Carter along with former Democratic presidential nominees Al Gore and Michael Dukakis support same-sex marriage. President Joe Biden has supported same-sex marriage since 2012, when he became the highest-ranking government official to support it. In 2022, Biden signed the Respect for Marriage Act; the law repealed the Defense of Marriage Act, which Biden had voted for during his Senate tenure. The 2016 Democratic Party platform declares, regarding the status of Puerto Rico: "We are committed to addressing the extraordinary challenges faced by our fellow citizens in Puerto Rico. Many stem from the fundamental question of Puerto Rico's political status.
Democrats believe that the people of Puerto Rico should determine their ultimate political status from permanent options that do not conflict with the Constitution, laws, and policies of the United States. Democrats are committed to promoting economic opportunity and good-paying jobs for the hardworking people of Puerto Rico. We also believe that Puerto Ricans must be treated equally by Medicare, Medicaid, and other programs that benefit families. Puerto Ricans should be able to vote for the people who make their laws, just as they should be treated equally. All American citizens, no matter where they reside, should have the right to vote for the president of the United States. Finally, we believe that federal officials must respect Puerto Rico's local self-government as laws are implemented and Puerto Rico's budget and debt are restructured so that it can get on a path towards stability and prosperity". Also, it declares that regarding the status of the District of Columbia: "Restoring our democracy also means finally passing statehood for the District of Columbia, so that the American citizens who reside in the nation's capital have full and equal congressional rights as well as the right to have the laws and budget of their local government respected without Congressional interference." With a stated goal of reducing crime and homicide, the Democratic Party has introduced various gun control measures, most notably the Gun Control Act of 1968, the Brady Bill of 1993 and the Violent Crime Control and Law Enforcement Act (1994). In its national platform for 2008, the only statement explicitly favoring gun control was a plan calling for renewal of the 1994 Assault Weapons Ban. In 2022, Democratic president Joe Biden signed the Bipartisan Safer Communities Act, which, among other things, expanded background checks and provided incentives for states to pass red flag laws. The Democratic Party does not oppose gun ownership. 
According to a 2023 Pew Research Center poll, 20% of Democrats owned firearms, compared to 32% of the general public and 45% of Republicans. The Democratic position on capital punishment has shifted multiple times over the decades. In 1968, Attorney General Ramsey Clark, representing the Johnson Administration, asked Congress to abolish the federal death penalty. In 1972, the Democratic Party platform called for the abolition of capital punishment. In 1988, Democratic Presidential nominee Michael Dukakis's statement in the 1988 United States presidential debates that he would oppose the death penalty even if his wife were raped and murdered was seen by many viewers as callous and emotionless, and was widely viewed as having contributed to his loss to George H. W. Bush in the general election. During his presidential campaign, Bill Clinton sought to distance himself from his party's left flank through his strong support for the death penalty, including by personally supervising the execution of Ricky Ray Rector, a lobotomized African-American man convicted of killing a police officer. During Clinton's presidency, Democrats led the expansion of the federal death penalty. These efforts were manifested in the 1994 Violent Crime Control and Law Enforcement Act, which expanded the federal death penalty to around 60 offenses, and the Antiterrorism and Effective Death Penalty Act of 1996, which heavily limited appeals in death penalty cases. The Democratic Party platforms of 1996 and 2000 supported capital punishment outright, while the Democratic Party platforms of 2008 and 2012 warned against arbitrary application and the execution of innocents. In June 2016, the Democratic Platform Drafting Committee unanimously adopted an amendment to abolish the death penalty. The 2020 Democratic Party platform reiterated the Party's opposition to capital punishment. 
The 2024 platform is the first since the 2004 platform that does not mention the death penalty, and the first since 2016 not to call for abolition. However, on December 23, 2024, President Biden commuted the sentences of 37 out of 40 federal death row inmates to life in prison without parole. Many Democrats are opposed to the use of torture against individuals apprehended and held prisoner by the United States military, and hold that categorizing such prisoners as unlawful combatants does not release the United States from its obligations under the Geneva Conventions. Democrats contend that torture is inhumane, damages the United States' moral standing in the world, and produces questionable results. Democrats are largely against waterboarding. Torture became a divisive issue in the party after Barack Obama was elected president. The Democratic Party believes that individuals should have a right to privacy. For example, many Democrats have opposed the NSA warrantless surveillance of American citizens. Some Democratic officeholders have championed consumer-protection laws that limit the sharing of consumer data among corporations. Democrats have opposed sodomy laws since the 1972 platform, which stated that "Americans should be free to make their own choice of life-styles and private habits without being subject to discrimination or prosecution", and believe that government should not regulate consensual noncommercial sexual conduct among adults as a matter of personal privacy. In foreign policy, the party supports liberal internationalism as well as tough stances against China and Russia. The foreign policy of the voters of the two major parties has largely overlapped since the 1990s. A Gallup poll in early 2013 showed broad agreement on the top issues, albeit with some divergence regarding human rights and international cooperation through agencies such as the United Nations. 
In June 2014, the Quinnipiac Poll asked Americans which foreign policy they preferred: A) The United States is doing too much in other countries around the world, and it is time to do less around the world and focus more on our own problems here at home. B) The United States must continue to push forward to promote democracy and freedom in other countries worldwide because these efforts make our own country more secure. Democrats chose A over B by 65% to 32%; Republicans chose A over B by 56% to 39%; and independents chose A over B by 67% to 29%. The Democratic Party has been critical of Iran's nuclear program and supported economic sanctions against the Iranian government. In 2013, the Democratic-led administration worked to reach a diplomatic agreement with the government of Iran to halt the Iranian nuclear program in exchange for international economic sanction relief. As of 2014, negotiations had been successful and the party called for more cooperation with Iran in the future. In 2015, the Obama administration agreed to the Joint Comprehensive Plan of Action, which provides sanction relief in exchange for international oversight of the Iranian nuclear program. In February 2019, the Democratic National Committee passed a resolution calling on the United States to re-enter the JCPOA, from which President Trump withdrew the United States in 2018. Democrats in the House of Representatives and in the Senate near-unanimously voted for the Authorization for Use of Military Force Against Terrorists against "those responsible for the recent attacks launched against the United States" in Afghanistan in 2001, supporting the NATO coalition invasion of the nation. Most elected Democrats continued to support the Afghanistan conflict during George W. Bush's presidency. During the 2008 Presidential Election, then-candidate Barack Obama called for a "surge" of troops into Afghanistan.
After winning the presidency, Obama followed through, sending additional troops to Afghanistan. Troop levels were 94,000 in December 2011 and kept falling, with a target of 68,000 by fall 2012. Support for the war among the American people diminished over time. Many Democrats changed their opinion over the course of the war, coming to oppose the continuation of the conflict. In July 2008, Gallup found that 41% of Democrats called the invasion a "mistake" while a 55% majority disagreed. A CNN survey in August 2009 stated that a majority of Democrats opposed the war. CNN polling director Keating Holland said: "Nearly two thirds of Republicans support the war in Afghanistan. Three quarters of Democrats oppose the war". During the 2020 Presidential Election, then-candidate Joe Biden promised to "end the forever wars in Afghanistan and the Middle East." Biden went on to win the election, and, in April 2021, he announced that he would withdraw all U.S. troops from Afghanistan by September 11 of that year. The last troops left in August, bringing America's 20-year-long military campaign in the country to a close. According to a 2023 AP-NORC poll, a majority of Democrats believed that the War in Afghanistan was not worth it. Democrats have historically been stronger supporters of Israel than Republicans. During the 1940s, the party advocated for the cause of an independent Jewish state over the objections of many conservatives in the Old Right, who strongly opposed it. In 1948, Democratic President Harry Truman became the first world leader to recognize an independent state of Israel. The 2020 Democratic Party platform acknowledges a "commitment to Israel's security, its qualitative military edge, its right to defend itself, and the 2016 Memorandum of Understanding is ironclad" and that "we oppose any effort to unfairly single out and delegitimize Israel, including at the United Nations or through the Boycott, Divestment, and Sanctions Movement". 
During the Gaza war, the party requested a large-scale military aid package to Israel. Biden also announced military support for Israel, condemned the actions of Hamas and other Palestinian militants as terrorism, and ordered the U.S. military to build a port to facilitate the arrival of humanitarian aid to Palestinian civilians in Gaza. However, parts of the Democratic base also became more skeptical of the Israeli government. The number of Democrats (and Americans in general) who oppose sending arms to Israel has grown as Israel's war in Gaza has continued. Experts said support for Israel could have hurt Democrats in several key states, including Michigan and Pennsylvania, in the 2024 presidential election. Late in 2024, twenty Democratic lawmakers requested support for U.S. legislation that would ban arms trade with countries that hinder humanitarian aid. According to a Pew Research Center poll conducted in March 2025, 69% of Democrats have an unfavorable view of Israel, compared to 53% in 2022, before the Gaza war. By July 2025, about half of the Democratic Senate delegation was opposed to sending arms to Israel. The 2022 Russian invasion of Ukraine was politically and economically opposed by the Biden administration, which promptly stepped up the arming of Ukraine. In October 2023, the Biden administration requested an additional $61.4 billion in aid for Ukraine for the year ahead, but delays in the passage of further aid by the Republican-controlled House of Representatives inhibited progress; the additional $61 billion in aid to Ukraine was added in April 2024.

Demographics

In the 2024 presidential election, the party performed best among voters who were upper-income, lived in urban areas, or were college graduates; who identified as atheist, agnostic, or Jewish; and among African Americans, LGBTQ+ voters, and unmarried voters. In particular, Kamala Harris' two strongest demographic groups in the 2024 presidential election were African Americans (86–13%) and LGBT voters (86–12%).
Support for the civil rights movement in the 1960s by Democratic presidents John F. Kennedy and Lyndon B. Johnson helped increase the Democrats' support within the African-American community. African Americans have consistently voted between 85% and 95% Democratic since the 1960s, making African Americans one of the largest of the party's constituencies. According to the Pew Research Center, 78.4% of Democrats in the 116th United States Congress were Christian. However, the vast majority of white evangelical and Latter-day Saint Christians favor the Republican Party. The party also receives strong support from non-religious voters. Younger Americans have tended to vote mainly for Democratic candidates in recent years, particularly those under the age of 30. In the 2024 presidential election, Harris won voters aged 18–29 (54–43%) and 30–39 (51–45%), tied among those aged 40–49 (49–49%), lost those aged 50–64 (43–56%), and narrowly lost those aged 65 and older (49–50%). The median voter is in their 50s. One of the main reasons that 18- to 29-year-old voters strongly support Democrats is that they are much less likely to be married. Harris tied Donald Trump with white voters aged 18 to 29 (49-49%) and won white women aged 18 to 29 (54-44%). On the state map of the white vote, Kamala Harris in 2024 won every state in which Joe Biden had won the white vote in 2020. Republican Donald Trump won every state in which Joe Biden had lost the white vote except for Virginia. Virginia is 20% African American, and its white voters are much less Republican than those of other Southern states because Northern Virginia, which is in the Washington metropolitan area, is a Democratic stronghold. On the county map of the white vote, Democrats carry white voters in most of New England and the West Coast. Democrats also do well in regions with high Nordic and Scandinavian ancestry.
For example, this keeps white voters in Minnesota and Wisconsin much less Republican than in other Midwestern states. Democrats are also relatively competitive among white voters, or win them outright, in parts of the Northeast, Midwest, and Southwest. Democrats do particularly poorly among white Southerners, as racial polarization is extremely high in the Southern United States. In the 2024 presidential election, African Americans supported Kamala Harris 86-13%, while white Southerners supported Donald Trump 67-32%. Even in many urban counties in the Southern United States, Democrats do not win a majority of white voters. Trump won both white Southerners with college degrees (57-41%) and without college degrees (75-24%). New Mexico is almost half Hispanic (49.3%), and is the most heavily Hispanic state in the country. Of the 19 states and the District of Columbia won by Kamala Harris in the 2024 presidential election, all except New Mexico had above-average educational attainment. New Mexico also had the lowest population density and the highest poverty rate of any state carried by Harris. Since 1980, a "gender gap" has seen stronger support for the Democratic Party among women than among men. Unmarried and divorced women are more likely to vote for Democrats. Although women supported Obama over Mitt Romney by a margin of 55–44% in 2012, Romney prevailed amongst married women, 53–46%. Obama won unmarried women 67–31%. According to a December 2019 study, "White women are the only group of female voters who support Republican Party candidates for president. They have done so by a majority in all but 2 of the last 18 elections". In the 2024 presidential election, LGBTQ+ voters supported Harris 86-12%, on par with African Americans. Harris lost married men (38–60%) and married women (47–52%), tied among unmarried men (48-48%), and won unmarried women (61-38%).
White women with college degrees support Democrats fairly strongly, with Harris winning them 58-41%, likely the best modern performance with this demographic. They were one of the few demographic groups that shifted towards Democrats from 2020 to 2024. Total fertility rate is strongly negatively correlated with support for the Democratic Party. Specifically, as total fertility increased in states, Democratic vote share decreased.

Geographically, the party is strongest in the Northeastern United States, parts of the Great Lakes region and Southwestern United States, and the West Coast. The party is also very strong in major cities, regardless of region. The Democratic Party has gradually lost power in the Southern United States since 1964. Although Richard Nixon carried 49 states in 1972, including every Southern state, the Republican Party remained quite weak at the local and state levels across the entire South for decades. Republicans first won a majority of U.S. House seats in the South in the 1994 "Republican Revolution", and only began to dominate the South after the 2010 elections. Since the 2010s, white Southerners have been the Republican Party's strongest racial demographic, in some Deep South states voting nearly as Republican as African Americans vote Democratic. This is partially attributable to religiosity, with white evangelical Christians in the Bible Belt, which covers most of the South, being the Republican Party's strongest religious demographic. The Democratic Party is particularly strong on the West Coast and in the Northeastern United States. In particular, the Democratic Party receives its strongest support from white voters in these two regions. This is attributable to the two regions having the highest educational attainment in the country and being part of the "Unchurched Belt", with the lowest rates of religiosity in the country.
The Democratic Party's support in the Midwest and Southwest is more mixed, with varying levels of support from white voters in both regions. In the Midwest, the Democratic Party receives varying levels of support, with some states safely Democratic, some swing states, and some safely Republican. In the Southwest, the Democratic Party also relies on Hispanic voters. The Democratic Party is particularly weak in the Great Plains and some Mountain states. In particular, the states of Idaho, Utah, Wyoming, North Dakota, South Dakota, Nebraska, Kansas, and Oklahoma have not voted for the Democratic Party since the 1964 presidential election. Montana has not voted for the Democratic Party since the 1992 presidential election. White voters show considerable regional variation. In the 2024 presidential election, Kamala Harris lost Southern white voters 32–67% and Midwestern white voters 40–59%. Harris tied among white voters in the Northeastern United States 49-49%, and won white voters in the Western United States 52-45%. Harris lost white voters in the country as a whole to Trump 42–57%. The Democratic Party's support is strongly positively correlated with increased population density, consistent with the urban-rural divide observed globally. Notably, in the 2024 presidential election, the swings against Kamala Harris grew with population density, shrinking the urban-rural divide slightly. Harris still received higher support as population density increased. But relative to 2020, urban areas had the largest swings against Harris, suburban areas had lesser swings against Harris, and rural areas had the smallest swings against Harris. Specifically, Harris won voters in urban areas (60-38%), narrowly lost voters in suburban areas (47–51%), and lost voters in rural areas (34–64%). The urban-rural divide holds after controlling for race.
Of the ten least densely populated states, the only one that Harris won was New Mexico, which is almost half Hispanic (49.3%). In the Southern United States, racial polarization is often stronger than the urban-rural divide. In particular, Democrats lose white voters in many Southern urban areas, while doing extremely well in rural majority-black counties. Until the 2016 victory of Republican Donald Trump, lower income was strongly correlated with voting for the Democratic Party among the general electorate. However, in all three of Trump's elections (2016, 2020, and 2024), the previous correlation between lower incomes and voting for the Democratic Party was eliminated. Instead, among white voters, higher educational attainment was strongly correlated with higher support for the Democratic Party. In the 2024 presidential election, Democratic nominee Kamala Harris did better among higher-income voters than lower-income voters for the first time in modern American political history. High-income voters, including high-income white voters and white men with college degrees, are no longer Republican demographic strongholds and voted in line with the national popular vote in 2024. Harris only narrowly lost white voters making $100,000 to $199,999 (49–50%), over $200,000 (48–51%), and white men with college degrees (48–50%), all on par with Harris losing the popular vote 48–50%. White men with college degrees are the highest-income demographic group. Nate Silver argues that the urban-rural divide, educational polarization, and racial polarization have rendered income irrelevant to voters in the Trump era. African Americans continue to be the lowest-income demographic in the United States. According to 2024 exit polls, 45% of black voters made less than $50,000 a year, compared to 27% of the electorate. Harris still won most of the lowest-income counties, which are mainly majority-black counties in the Southern Black Belt.
Higher educational attainment is strongly correlated with higher income and wealth, and the 2021–2023 inflation surge resulted in lower-income voters losing purchasing power while higher-income voters gained as inflation drove up asset prices, including stocks and real estate. After controlling for education, there was little difference in white-voter support for Harris by annual income. Note that 54% of white voters did not have college degrees while 46% of white voters did have college degrees. According to a 2022 Gallup poll, roughly equal proportions of Democrats (64-35%) and Republicans (66-34%) had money invested in the stock market. In the 2020 presidential election, college-educated white voters in all 50 states voted more Democratic than non-college white voters, as displayed in the two maps. As of 2022, over 90% of American adults over the age of 25 have completed high school. However, only 35% have a Bachelor's degree and 17% have a graduate degree. Higher educational attainment among white voters corresponds to increased ideological support for the Democratic Party. Educational attainment is not the only factor that affects ideology among white voters. After controlling for education, there remain huge variations by state and region. Educational polarization is weaker than racial polarization in the South. Educational polarization has benefitted Democrats in some well-educated Southern states because it has not changed African-American support for Democrats. Democrats are competitive in Georgia and North Carolina because there is much more room for Democrats to grow among white Southerners with college degrees than ground for Democrats to fall among white Southerners without college degrees. This also keeps Virginia reliably Democratic, despite Republicans obtaining a majority of the white vote. In the 2024 presidential election, among white voters, educational attainment was strongly positively correlated with support for Kamala Harris.
Specifically, as educational attainment increased among white voters, so did support for Harris. It was not only a matter of having a college degree or not; support for Harris continuously increased as educational attainment increased. Educational polarization among white voters is stronger than polarization by gender or marital status, but weaker than racial polarization in the South. According to a November 2024 Gallup poll, unionization rates were positively correlated with educational attainment and income: 15% of those with graduate degrees, 8% with bachelor's degrees, 9% with some college, and 5% with high school or less were unionized, while 11% of those with household incomes of $100,000 or more, 7% of those with $40,000 to $99,999, and 3% with less than $40,000 were unionized. Also, only 6% of those in the private sector were unionized, compared to 28% of government employees. Many Democrats without college degrees differ from liberals in their more socially moderate views and are more likely to belong to an ethnic minority. White voters with college degrees are more likely to live in urban areas.

Factions

At its founding, the Democratic Party supported agrarianism and the Jacksonian democracy movement of President Andrew Jackson, representing farmers, rural interests, and traditional Jeffersonian democrats. Since the 1890s, especially in Northern states, the party began to favor more liberal positions (the term "liberal" in this sense describes modern liberalism, rather than classical liberalism or economic liberalism). Historically, the party has represented farmers, laborers, and religious and ethnic minorities, as it has opposed unregulated business and finance and favored progressive income taxes. In the 1930s, the party began advocating social programs targeted at the poor. Before the New Deal, the party had a fiscally conservative, pro-business wing, typified by Grover Cleveland and Al Smith.
The party was dominant in the Southern United States until President Lyndon B. Johnson signed the Civil Rights Act of 1964. In foreign policy, internationalism (including interventionism) was a dominant theme from 1913 to the mid-1960s. The major influences for liberalism were labor unions (which peaked in the 1936–1952 era) and African Americans. Environmentalism has been a major component since the 1970s. Even after the New Deal, until the 2010s, the party still had a fiscally conservative faction, represented by figures such as John Nance Garner and Howard W. Smith. The party's Southern conservative wing began shrinking after President Lyndon B. Johnson supported the Civil Rights Act of 1964, and largely died out in the 2010s, as the Republican Party built up its Southern base. The party still receives support from African Americans and urban areas in the Southern United States. The 21st-century Democratic Party is predominantly a coalition of centrists, liberals, and progressives, with significant overlap between the three groups. In 2019, the Pew Research Center found that among Democratic and Democratic-leaning registered voters, 47% identify as liberal or very liberal, 38% identify as moderate, and 14% identify as conservative or very conservative. Political scientists characterize the Democratic Party as less ideologically cohesive than the Republican Party due to the broader diversity of the coalitions that compose it. The party has lost significant ground with voters without college degrees in the 21st century, in line with trends across the developed world. The realignment unfolded gradually, first with white voters in the South and Midwest, and later with voters without college degrees as a whole, except for African Americans. Democrats have consistently won voters with graduate degrees since the 1990s, including a majority of white voters with graduate degrees.
Since the 2010s, the party's main demographic gains have been among white voters with college degrees, which until 2016 had been a Republican-leaning group. The party still receives extremely strong support from African Americans, but has lost ground among other racial minorities, including Hispanics, Native Americans, and Asian Americans. Modern liberals are a large portion of the Democratic base. According to 2018 exit polls, liberals constituted 27% of the electorate, and 91% of American liberals favored the candidate of the Democratic Party. White-collar college-educated professionals were mostly Republican until the 1950s, but they had become a vital component of the Democratic Party by the early 2000s. According to a 2025 Gallup poll, 37% of American voters identify as "conservative" or "very conservative", 34% as "moderate", and 25% as "liberal" or "very liberal". Among Democrats, 9% identified as conservative, 34% as moderate, and 55% as liberal. A large majority of liberals favor moving toward universal health care. A majority also favor diplomacy over military action, stem cell research, same-sex marriage, stricter gun control, environmental protection laws, and the preservation of abortion rights. Immigration and cultural diversity are deemed positive, as liberals favor cultural pluralism, a system in which immigrants retain their native culture in addition to adopting their new culture. Most liberals oppose increased military spending and the mixing of church and state. As of 2020, the three most significant labor groupings in the Democratic coalition were the AFL–CIO and Change to Win labor federations, as well as the National Education Association, a large, unaffiliated teachers' union. Important issues for labor unions include supporting unionized manufacturing jobs, raising the minimum wage, and promoting broad social programs such as Social Security and Medicare. This ideological group is strongly correlated with high educational attainment.
According to the Pew Research Center, 49% of liberals were college graduates, the highest figure of any typological group. It was also the fastest-growing typological group from the late 1990s to the present. Liberals include much of academia and large portions of the professional class. Moderate Democrats, or New Democrats, are an ideologically centrist faction within the Democratic Party that emerged after the victory of Republican George H. W. Bush in the 1988 presidential election. Running as a New Democrat, Bill Clinton won the 1992 and 1996 presidential elections. They are an economically liberal and "Third Way" faction that dominated the party for around 20 years, until the beginning of Obama's presidency. They are represented by organizations such as the New Democrat Network and the New Democrat Coalition. The Blue Dog Coalition was formed during the 104th Congress to give members from the Democratic Party representing conservative-leaning districts a unified voice after the Democrats' loss of Congress in the 1994 Republican Revolution. However, in the late 2010s and early 2020s, the coalition's focus shifted towards ideological centrism. One of the most influential centrist groups was the Democratic Leadership Council (DLC), a nonprofit organization that advocated centrist positions for the party; the DLC disbanded in 2011. Some Democratic elected officials have self-declared as centrists, including former president Bill Clinton, former vice president Al Gore, Senator Mark Warner, Kansas governor Laura Kelly, former senator Jim Webb, and President Joe Biden. The New Democrat Network supports socially liberal and fiscally moderate Democratic politicians and is associated with the congressional New Democrat Coalition in the House. Annie Kuster is the chair of the coalition, and former senator and president Barack Obama described himself as a New Democrat. In the 21st century, some former Republican moderates have switched to the Democratic Party.
Progressives are the most left-leaning faction in the party and support strong business regulations, social programs, and workers' rights. In 2014, progressive Senator Elizabeth Warren set out "Eleven Commandments of Progressivism": tougher regulation of corporations; affordable education; scientific investment and environmentalism; net neutrality; increased wages; equal pay for women; collective bargaining rights; defending social programs; same-sex marriage; immigration reform; and unabridged access to reproductive healthcare. The Congressional Progressive Caucus (CPC) is a caucus of progressive Democrats chaired by Greg Casar of Texas. Its members have included representatives Dennis Kucinich of Ohio, John Conyers of Michigan, Jim McDermott of Washington, Barbara Lee of California, and Senator Paul Wellstone of Minnesota. Senators Tammy Baldwin of Wisconsin, Mazie Hirono of Hawaii, and Ed Markey of Massachusetts were members of the caucus when in the House of Representatives. As of 2024, the CPC is the second-largest ideological caucus in the House Democratic Caucus by voting members, behind the New Democrat Coalition. Senator Bernie Sanders has often been viewed as a leader of the progressive movement; he ran presidential campaigns in 2016 and 2020. Other members of the progressive faction include the Squad.

Democratic presidents

As of 2025, there have been a total of 16 Democratic presidents.
======================================== |
[SOURCE: https://techcrunch.com/2025/01/28/chatgpt-everything-to-know-about-the-ai-chatbot/]
ChatGPT: A 2025 timeline of updates to OpenAI’s text-generating chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users. In 2025, OpenAI has battled the perception that it was ceding ground in the AI race to Chinese rivals like DeepSeek, all while the company has tried to shore up its relationship with Washington, pursued ambitious data center projects, and laid the groundwork for one of the largest funding rounds in history. Most recently though, headlines around OpenAI have focused on its competition gaining ground, with CEO Sam Altman’s “code red” internal memo shifting company focus toward its flagship chatbot. And going further into the archives for context, this year came after a packed 2024, from OpenAI’s partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati.
OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit. Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024-specific updates, go here. Timeline of the most recent ChatGPT updates OpenAI has added new controls in ChatGPT that let users adjust the chatbot’s warmth, enthusiasm, emoji use, and formatting style. This builds on existing tone options, addressing past complaints about the AI being too sycophantic or cold. OpenAI has updated its guidelines for users under 18 and released new resources for parents to promote safer interactions with ChatGPT. Experts caution that while the rules are clearer on paper, it’s unclear how consistently the AI follows them in practice. ChatGPT has surpassed $3 billion in global consumer spending on mobile since its 2023 launch. This makes it one of the fastest-growing apps in terms of revenue, outpacing rivals like TikTok, Disney+, and HBO Max. OpenAI has released GPT Image 1.5, a new version of ChatGPT Images that’s faster and better at following instructions and making precise edits. The update comes as OpenAI races to keep up with Google’s Gemini and Nano Banana Pro in AI image generation. Disney is putting $1 billion into OpenAI as a way to dive into AI, letting users on Sora create videos using over 200 Disney characters, at least for the first year exclusively. Bob Iger says the deal gives Disney a chance to explore AI while protecting its characters and figuring out how to use this technology in the future. OpenAI says enterprise use of its AI tools has surged, with ChatGPT message volume up 8x since late 2024 and workers saving up to an hour a day. 
The data underscores OpenAI’s push to win enterprise customers as competition heats up from Google, Anthropic, and open-model rivals, a recurring theme you’ll see in recent updates. OpenAI rolled out its latest model, GPT-5.2, as competition with Google continued to heat up. The model will roll out to paid ChatGPT users and developers in three versions — Instant, Thinking, and Pro — tailored for everything from everyday tasks to complex reasoning and high-accuracy work. Disney has signed a three-year deal with OpenAI, investing $1 billion and bringing characters from Disney, Marvel, Pixar, and Star Wars to OpenAI’s Sora video generator. The partnership will let users create AI videos using hundreds of Disney-owned characters, costumes, and props. On the same day, Disney notably launched a lawsuit against Google alleging “massive” copyright infringement occurring in its AI models. OpenAI CEO Sam Altman has put OpenAI on “code red,” telling staff the company will prioritize improving ChatGPT as pressure mounts from Google and other AI competitors, according to The Information. As part of the move, OpenAI plans to put some other initiatives, including advertising, on the back burner. OpenAI launched a new AI shopping feature in ChatGPT ahead of the peak holiday shopping window to help users research potential purchases. OpenAI’s new ChatGPT shopping feature lets users get product recommendations by describing features or sharing photos to find similar items at different prices. And they’re not alone, with both Perplexity and a slew of competitor startups playing in the commerce space. After Adam Raine’s family sued OpenAI in August, claiming their teen used ChatGPT as a “suicide coach,” OpenAI said in a new court filing that it isn’t liable, arguing the chatbot was misused. This marks OpenAI’s first response to a case that has raised wider concerns about chatbots and mental health risks. 
OpenAI is bringing ChatGPT’s voice mode straight into the main chat, so you no longer have to jump to a separate screen. Now you can talk to ChatGPT and see everything it says and shows right in the same window. OpenAI can’t use “cameo” for Sora features for now, following a trademark lawsuit from the video app Cameo, with the ban lasting until December 22. ChatGPT is now getting group chats for everyone — Free, Go, Plus, and Pro users alike — after testing it in a few regions last week. You can now team up with friends, family, or co-workers in one chat with ChatGPT to plan, create, or make decisions together. OpenAI has released GPT‑5.1, upgrading the GPT‑5 series with two models: Instant, which it says will be warmer and more conversational with users, and Thinking, which offers faster, simple-task handling and more persistent complex reasoning. The update also introduces improved controls for customizing ChatGPT’s tone to better match user preferences. A Munich court ruled that ChatGPT violated German copyright law by reproducing lyrics from nine protected songs, including Herbert Grönemeyer’s hits, rejecting OpenAI’s argument that the AI only reflected learned patterns. The decision could set a European precedent on AI use of copyrighted material, amid growing global legal challenges over AI and music rights. OpenAI is exploring the consumer health sector, developing AI tools like personal health assistants and data aggregators, according to a report by Business Insider. With new healthcare-focused hires, it aims to simplify access to fragmented medical data — an area where Big Tech has struggled — through its conversational AI approach. In November 2025, seven families sued OpenAI, alleging that GPT-4o was released prematurely without safeguards, contributing to suicides and severe psychiatric harm. One case involved 23-year-old Zane Shamblin, who told ChatGPT of his suicide plans, and the AI encouraged him. 
The lawsuits focus on GPT-4o’s tendency to be overly agreeable, despite users expressing dangerous intentions. On November 5, OpenAI announced that over 1 million businesses globally now use its products, making it the fastest-growing business platform in history. Companies across industries like finance, healthcare, and retail, including Amgen, Booking.com, Cisco, Morgan Stanley, T-Mobile, Target, and Thermo Fisher Scientific, are using ChatGPT and OpenAI’s developer tools to enhance operations and customer experiences. OpenAI revealed that a small but significant portion of ChatGPT users, more than a million weekly, discuss mental health struggles, including suicidal thoughts, psychosis, or mania, with the AI. The company says it has improved ChatGPT’s responses by consulting more than 170 mental health experts to handle such conversations more appropriately than earlier versions. OpenAI is developing a new tool that generates music from text and audio prompts, potentially for enhancing videos or adding instrumentation, and is training it using annotated scores from Juilliard students, according to The Information. The launch date and whether it will be standalone or integrated with ChatGPT and Sora remain unclear. OpenAI’s new “company knowledge” update for ChatGPT lets Business, Enterprise, and Education users search workplace data across tools like Slack, Google Drive, and GitHub using GPT‑5, per a report by The Verge. The feature acts as a conversational search engine, providing more comprehensive and accurate answers by scouring multiple sources simultaneously. OpenAI has launched its AI browser, ChatGPT Atlas, starting on Mac, letting users get answers from ChatGPT instead of traditional search results. Unlike other AI browsers, Atlas is open to all users and will soon come to Windows, iOS, and Android, as OpenAI aims to make ChatGPT the go-to tool for browsing the web. 
A new Apptopia analysis suggests ChatGPT’s mobile app growth may be leveling off, with global download growth slowing since April. While daily installs remain in the millions, October is tracking an 8.1% month-over-month decline in new downloads. OpenAI is partnering with Walmart to allow users to browse products, plan meals, and make purchases through ChatGPT, with support for third-party sellers expected later this fall. The partnership is part of OpenAI’s broader effort to develop AI-driven e-commerce tools, including collaborations with Etsy and Shopify. OpenAI is expanding its affordable ChatGPT Go plan, priced under $5, to 16 new countries across Asia, including Afghanistan, Bangladesh, Bhutan, Brunei Darussalam, Cambodia, Laos, Malaysia, Maldives, Thailand, Vietnam, and Pakistan. In some of these countries, users can pay in local currencies, while in others, payments are required in USD, with final costs varying due to local taxes. ChatGPT now has 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises, and governments, Sam Altman said. This milestone comes as OpenAI accelerates efforts to expand its AI infrastructure and secure more chips to support rising demand. OpenAI now allows developers to build interactive apps directly inside ChatGPT, with early partners like Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva already onboard. The ChatGPT maker is also rolling out a preview of its Apps SDK, a developer toolkit for creating these chat-based experiences. OpenAI is reportedly adding parental controls to ChatGPT on web and mobile, letting parents and teens link accounts to enable safeguards like limiting sensitive content, setting quiet hours, and disabling features such as voice mode or image generation. The move comes amid growing regulatory scrutiny and a lawsuit over the chatbot’s alleged role in a teen’s suicide. 
OpenAI unveiled Pulse, a new ChatGPT feature that delivers personalized morning briefings overnight, encouraging users to start their day with the app. The tool reflects a shift toward making ChatGPT more proactive and asynchronous, positioning it as a true assistant rather than just a chatbot. OpenAI’s new Applications CEO, Fidji Simo, called Pulse the first step toward bringing high-level personal support to everyone, starting with Pro users. OpenAI launched Instant Checkout in ChatGPT, letting U.S. users purchase products directly from Etsy and, soon, over a million Shopify merchants without leaving the conversation. Shoppers can browse items, read reviews, and complete purchases with a single tap using Apple Pay, Google Pay, Stripe, or a credit card. The update marks a step toward reshaping online shopping by merging product discovery, recommendations, and payments in one place. OpenAI rolled out its budget-friendly ChatGPT Go plan in Indonesia for Rp 75,000 ($4.50) per month, following its initial launch in India. The mid-tier plan, which offers higher usage limits, image generation, file uploads, and better memory compared to the free version, enters the market in direct competition with Google’s new AI Plus plan in Indonesia. CEO Sam Altman announced new policies for under-18 users of ChatGPT, tightening safeguards around sensitive conversations. The company says it will block flirtatious exchanges with minors and add stronger protections around discussions of suicide, even escalating severe cases to parents or authorities. The move comes as OpenAI faces a wrongful death lawsuit tied to alleged chatbot interactions, underscoring rising concerns about the mental health risks of AI companions. OpenAI rolled out GPT-5-Codex, a new version of its AI coding agent that can spend anywhere from a few seconds to seven hours tackling a task, depending on complexity. 
The company says this dynamic approach helps the model outperform GPT-5 on key coding benchmarks, including bug fixes and large-scale refactoring. The update comes as OpenAI looks to keep Codex competitive in a fast-growing market that now includes rivals like Claude Code, Cursor, and GitHub Copilot. OpenAI is shaking up its Model Behavior team, the small but influential group that helps shape how its AI interacts with people. The roughly 14-person team is being folded into the larger Post Training group, now reporting to lead researcher Max Schwarzer. Meanwhile, founding leader Joanne Jang is spinning up a new unit called OAI Labs, focused on prototyping fresh ways for people to collaborate with AI. OpenAI, facing a lawsuit from the parents of a 16-year-old who died by suicide, said in its blog that it has implemented new safeguards for ChatGPT, including stronger detection of mental health risks and parental control features. The AI company said the updates aim to provide tighter protections around suicide-related conversations and give parents more oversight of their children’s use. Elon Musk’s AI startup, xAI, filed a federal lawsuit in Texas against Apple and OpenAI, alleging that the two companies colluded to lock up key markets and shut out rivals. OpenAI introduced its most affordable subscription plan, ChatGPT Go, in India, priced at 399 rupees per month (approximately $4.57). This move aims to expand OpenAI’s presence in its second-largest market, offering enhanced access to the latest GPT-5 model and additional features. Since its May 2023 launch, ChatGPT’s mobile app has amassed $2 billion in global consumer spending, dwarfing competitors like Claude, Copilot, and Grok by roughly 30 times, according to Appfigures. This year alone, the app has generated $1.35 billion, a 673% increase from the same period in 2024, averaging nearly $193 million per month, or 53 times more than its nearest rival, Grok. 
Despite unveiling GPT-5 as a “one-size-fits-all” AI, OpenAI is still offering several legacy AI options, including GPT-4o, GPT-4.1, and o3. Users can choose between new “Auto,” “Fast,” and “Thinking” modes for GPT-5, and paid subscribers regain access to legacy models like GPT-4o and GPT-4.1. Updates to ChatGPT: You can now choose between “Auto”, “Fast”, and “Thinking” for GPT-5. Most users will want Auto, but the additional control will be useful for some people. Rate limits are now 3,000 messages/week with GPT-5 Thinking, and then extra capacity on GPT-5 Thinking… OpenAI CEO Sam Altman told Reddit users that GPT-5’s “dumber” behavior at launch was due to a router issue and promised fixes, double rate limits for Plus users, and transparency on which model is answering, while also shrugging off the infamous “chart crime” from the live presentation. OpenAI released GPT-5, a next-gen AI that’s not just smarter but more useful — able to handle tasks like coding apps, managing calendars, and creating research briefs — while automatically figuring out the fastest or most thoughtful way to answer your questions. OpenAI is making a major push into federal government workflows, offering ChatGPT Enterprise to agencies for just $1 for the next year. The move comes after the U.S. General Services Administration (GSA) added OpenAI, Google, and Anthropic to its approved AI vendor list, allowing agencies to access these tools through preset contracts without negotiating pricing. OpenAI unveiled its first open source language models since GPT-2, introducing two new open-weight AI releases: gpt-oss-120b, a high-performance model capable of running on a single Nvidia GPU, and gpt-oss-20b, a lighter model optimized for laptop use. The move comes amid growing competition in the global AI market and a push for more open technology in the U.S. and abroad. ChatGPT’s rapid growth is accelerating.
OpenAI said the chatbot was on track to hit 700 million weekly active users in the first week of August, up from 500 million at the end of March. Nick Turley, OpenAI’s VP and head of the ChatGPT app, highlighted the app’s growth on X, noting it has quadrupled in size over the past year. This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year. Every day, people and teams are learning, creating, and solving harder problems. Big week ahead. Grateful to the team for making ChatGPT more useful and… OpenAI unveiled Study Mode, a new ChatGPT feature designed to promote critical thinking by prompting students to engage with material rather than simply receive answers. The tool is now rolling out to Free, Plus, Pro, and Team users, with availability for Edu subscribers expected in the coming weeks. ChatGPT users should be cautious when seeking emotional support from AI, as the AI industry lacks safeguards for sensitive conversations, OpenAI CEO Sam Altman said on a recent episode of This Past Weekend w/ Theo Von. Unlike human therapists, AI tools aren’t bound by doctor-patient confidentiality, he noted. ChatGPT now receives 2.5 billion prompts daily from users worldwide, including roughly 330 million from the U.S. That’s more than double the volume reported by CEO Sam Altman just eight months ago, highlighting the chatbot’s explosive growth. OpenAI has introduced ChatGPT Agent, which completes a wide variety of computer-based tasks on behalf of users and combines several capabilities like Operator and Deep Research, according to the company. OpenAI says the agent can automatically navigate a user’s calendar, draft editable presentations and slideshows, run code, shop online, and handle complex workflows from end to end, all within a secure virtual environment. 
Researchers at Stanford University have observed that therapy chatbots powered by large language models can sometimes stigmatize people with mental health conditions or respond in ways that are inappropriate or could be harmful. While chatbots are “being used as companions, confidants, and therapists,” the study found “significant risks.” CEO Sam Altman said that the company is delaying the release of its open model, which had already been postponed by a month earlier this summer. The ChatGPT maker, which initially planned to release the model around mid-July, has indefinitely postponed its launch to conduct additional safety testing. we planned to launch our open-weight model next week. we are delaying it; we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us. while we trust the community will build great things with this model, once weights are… OpenAI plans to release an AI-powered web browser to challenge Alphabet’s Google Chrome. It will keep some user interactions within ChatGPT, rather than directing people to external websites. Some ChatGPT users have noticed a new feature called “Study Together” appearing in their list of available tools. This is the chatbot’s approach to becoming a more effective educational tool, rather than simply providing answers to prompts. Some people also wonder whether there will be a feature that allows multiple users to join the chat, similar to a study group. Referrals from ChatGPT to news publishers are increasing. But this rise is insufficient to offset the decline in clicks as more users now obtain their news directly from AI or AI-powered search results, according to a report by digital market intelligence company Similarweb. Since Google launched its AI Overviews in May 2024, the percentage of news searches that don’t lead to clicks on news websites has increased from 56% to nearly 69% by May 2025.
OpenAI has started using Google’s AI chips to power ChatGPT and other products, as reported by Reuters. The ChatGPT maker is one of the biggest buyers of Nvidia’s GPUs, using the AI chips to train models, and this is the first time that OpenAI is using non-Nvidia chips in an important way. Researchers from MIT’s Media Lab monitored the brain activity of writers in 32 regions. They found that ChatGPT users showed minimal brain engagement and consistently fell short in neural, linguistic, and behavioral aspects. To conduct the test, the lab split 54 participants from the Boston area into three groups, each consisting of individuals ages 18 to 39. The participants were asked to write multiple SAT essays using tools such as OpenAI’s ChatGPT, the Google search engine, or without any tools. The ChatGPT app for iOS was downloaded 29.6 million times in the last 28 days, while TikTok, Facebook, Instagram, and X were downloaded a total of 32.9 million times during the same period, representing a difference of about 10.6%, according to a ZDNET report citing Similarweb’s X post. Sam Altman said that the average ChatGPT query uses about one-fifteenth of a teaspoon of water (equivalent to 0.000083 gallons) and the energy required to power a lightbulb for a few minutes, per Business Insider. In addition, the chatbot requires 0.34 watt-hours of electricity to operate. OpenAI has unveiled o3-pro, an enhanced version of its o3 reasoning model, which the ChatGPT maker launched earlier this year. O3-pro is available for ChatGPT Pro and Team users and in the API, while Enterprise and Edu users will get access in the third week of June.
“OpenAI o3-pro is available in the model picker for Pro and Team users starting today, replacing OpenAI o1-pro. Enterprise and Edu users will get access the week after. As o3-pro uses the same underlying model as o3, full safety details can be found in the o3 system card.” … OpenAI upgraded ChatGPT’s conversational voice mode for all paid users across markets and platforms. The startup has launched an update to Advanced Voice that lets users converse with ChatGPT out loud in a more natural and fluid-sounding way. The feature also helps users translate languages more easily, the company said. OpenAI’s ChatGPT now offers new functions for business users, including integrations with various cloud services, meeting recordings, and MCP connection support for connecting to tools for in-depth research. The feature enables ChatGPT to retrieve information from users’ own services to answer their questions. For instance, an analyst could use the company’s slide decks and documents to develop an investment thesis. OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future. OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests. Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life, in response to an attendee’s question about how ChatGPT can become more personalized. OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.
“By popular request, GPT-4.1 will be available directly in ChatGPT starting today. GPT-4.1 is a specialized model that excels at coding tasks & instruction following. Because it’s faster, it’s a great alternative to OpenAI o3 & o4-mini for everyday coding needs.” OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson. After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and the API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products. OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the local infrastructure needed to better serve international AI clients. The AI startup will work with governments to help increase data center capacity and customize OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg. OpenAI has announced plans to change its procedures for updating the AI models that power ChatGPT, following an update that made the platform overly sycophantic for many users. OpenAI has released a post on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released last week.
CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It became a popular meme fast. An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.” OpenAI has added a few features to its ChatGPT search, its web search tool in ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics. OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch. OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch. OpenAI released a new AI model called GPT-4.1 in mid-April. 
However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company shipped GPT-4.1 without the safety report, known as a system card, that typically accompanies its model releases, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.” Questions have been raised about OpenAI’s transparency and model-testing procedures after a discrepancy emerged between first- and third-party benchmark results for its o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 achieved a score of approximately 10%, significantly lower than OpenAI’s top-reported score. OpenAI has launched a new API feature called Flex processing that lets users run AI models at a lower cost in exchange for slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads. OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report. OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.
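As a rough illustration of how a developer might opt into the lower-cost tier described above, the sketch below assembles request parameters for the OpenAI Python SDK. The `service_tier="flex"` value and the long timeout follow OpenAI's Flex processing announcement, but treat the exact names and values as assumptions to be checked against current documentation.

```python
# Hedged sketch: building a Flex-processing request for a non-production task.
# The "service_tier" value and timeout below are assumptions based on the
# Flex processing announcement, not verified against the live API reference.

def flex_request_params(model: str, prompt: str) -> dict:
    """Assemble keyword arguments for a cheaper, slower Flex request."""
    return {
        "model": model,          # Flex is offered in beta on o3 and o4-mini
        "service_tier": "flex",  # opt into lower-cost, best-effort compute
        "messages": [{"role": "user", "content": prompt}],
        "timeout": 900,          # Flex calls can be slow; allow a long wait
    }

params = flex_request_params("o4-mini", "Enrich this record with categories.")
# Real call (requires an API key and network access):
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**params)
print(params["service_tier"])  # -> flex
```

Because Flex requests may be queued or occasionally rejected for lack of capacity, they suit the batch-style workloads the article mentions (evaluations, data enrichment) rather than interactive use.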
OpenAI introduced a new section called “library” to give users one place to browse their image creations on mobile and web platforms, per the company’s X post. “All of your image creations, all in one place. Introducing the new library for your ChatGPT image creations—rolling out now to all Free, Plus, and Pro users on mobile and https://t.co/nYW5KO1aIg. pic.twitter.com/ADWuf5fPbj” OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face growing pressure to ship models quickly amid increased competition. OpenAI is in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT. OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it launched just in late February. GPT-4.5 will remain available in ChatGPT as a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14. OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3. OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.
OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report. OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. The feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland. It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.” OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools not available in the free version. More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images. The Arc Prize Foundation, which develops the AI benchmark ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem.
The foundation now thinks the cost could be much higher, possibly around $30,000 per task. In a series of posts on X, OpenAI CEO Sam Altman said the popularity of the company’s new image-generation tool may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote. OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia. OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for its ability to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they might cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior. OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization. OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026, surpassing $29.4 billion, the report said. OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher-than-expected demand. Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.
OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch. OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans. Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. 
You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.” OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less. OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1. Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms. OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming. And it turns out that it might not be that great at creative writing at all. we trained a new model that is good at creative writing (not sure yet how/when it will get released). 
this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right. PROMPT: Please write a metafictional literary short story… OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026. OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: the company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them. The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains IDEs. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users. According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024.
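To make the Responses API description above concrete, here is a minimal sketch of how a developer might request a web-searching agent through the OpenAI Python SDK. The built-in tool name `web_search_preview` reflects the API at launch and is an assumption; consult the current reference before relying on it.

```python
# Hedged sketch of a Responses API payload with a built-in web-search tool.
# The tool type "web_search_preview" is an assumption based on the launch-era
# API and may have changed; verify against OpenAI's current documentation.

def agent_request(question: str) -> dict:
    """Assemble a Responses API payload for a simple web-searching agent."""
    return {
        "model": "gpt-4o",
        "tools": [{"type": "web_search_preview"}],  # let the model browse
        "input": question,
    }

payload = agent_request("What changed in the latest ChatGPT release?")
# Real call (requires an API key and network access):
# from openai import OpenAI
# result = OpenAI().responses.create(**payload)
```

Unlike the older Assistants API, the Responses API folds tool use (web search, file search, computer use) into a single request, which is why the article describes it as the Assistants API's effective replacement.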
It took ChatGPT nine months to grow its weekly active users from 100 million in November 2023 to 200 million in August 2024, but less than six months to double that number again, according to the report. ChatGPT’s weekly active users reached 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently thanks to the launch of new models and features, such as the multimodal GPT-4o; usage spiked from April to May 2024, shortly after that model’s launch. OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model. A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t account for the additional energy costs of features like image generation or input processing. In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers. OpenAI now allows anyone to use ChatGPT web search without logging in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however.
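The gap between the commonly cited 3 watt-hours per query and Epoch AI's roughly 0.3 watt-hour estimate is easy to check with back-of-envelope arithmetic; the short calculation below uses only the two figures quoted above.

```python
# Back-of-envelope comparison of the two per-query energy figures above:
# the commonly cited 3 Wh versus Epoch AI's ~0.3 Wh estimate for GPT-4o.

COMMON_CLAIM_WH = 3.0    # widely repeated per-query figure
EPOCH_ESTIMATE_WH = 0.3  # Epoch AI's estimate for a typical GPT-4o query

def queries_per_kwh(wh_per_query: float) -> float:
    """How many queries one kilowatt-hour covers at a given per-query cost."""
    return 1000.0 / wh_per_query

print(queries_per_kwh(COMMON_CLAIM_WH))     # ~333 queries per kWh
print(queries_per_kwh(EPOCH_ESTIMATE_WH))   # ~3,333 queries per kWh
print(COMMON_CLAIM_WH / EPOCH_ESTIMATE_WH)  # the old claim is ~10x the estimate
```

In other words, if Epoch AI's figure holds, a single kilowatt-hour covers roughly ten times as many queries as the older estimate implied, though, as the article notes, neither figure accounts for image generation or long inputs.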
To use ChatGPT in any form through the native mobile app, you will still need to be logged in. OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources. OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.” A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users. OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data. 
Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm. OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted-data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, 60 days shorter than Operator’s. OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online. Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website. OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via email.
Multi-factor authentication also isn’t supported without a valid email. ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week. OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely. ChatGPT is a general-purpose chatbot developed by tech startup OpenAI that uses artificial intelligence to generate text after a user enters a prompt. The chatbot is built on GPT-4, a large language model that uses deep learning to produce human-like text. ChatGPT was released for public use on November 30, 2022. Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models; the most recent model is GPT-4o. In addition to the paid ChatGPT Plus, there is a free version that only requires a sign-in. Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions and concerns. Many enterprises utilize ChatGPT, although others limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Looking Glass, a Brooklyn-based 3D display startup, uses ChatGPT to produce holograms you can converse with.
And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space. GPT stands for Generative Pre-Trained Transformer. A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt. Yes. Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry. Yes, there is a free ChatGPT mobile app for iOS and Android users. It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words. Yes, it was released March 1, 2023. Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc. Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc. It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used. Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet. Yes. 
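To illustrate the distinction drawn above between rules-based chatbots and LLM-powered ones like ChatGPT: a rules-based bot is essentially a lookup table of canned replies, with no language model involved. The keyword table below is a made-up example, not any real product.

```python
# Minimal rules-based chatbot: canned responses keyed on keywords, the kind
# of non-AI chatbot the text contrasts with LLM-powered systems like ChatGPT.

CANNED_REPLIES = {
    "hello": "Hi there! How can I help?",
    "hours": "We're open 9am-5pm, Monday through Friday.",
}

def rules_based_bot(message: str) -> str:
    """Return a canned reply if a known keyword appears, else a fallback."""
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't understand that."

print(rules_based_bot("Hello!"))                # matches the "hello" rule
print(rules_based_bot("What are your hours?"))  # matches the "hours" rule
print(rules_based_bot("Tell me a story"))       # no rule matches: fallback
```

Anything outside the table hits the fallback, which is exactly why such bots feel "canned": unlike an LLM, they cannot generate novel text for unanticipated prompts.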
There are multiple AI-powered chatbot competitors, such as Together, Google’s Gemini, and Anthropic’s Claude, and developers are creating open source alternatives. OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.” The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.” In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.” Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm. An Australian mayor has publicly announced he may sue OpenAI for defamation over ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service. CNET found itself in the midst of controversy after Futurism reported that the publication was publishing articles under a mysterious byline completely generated by AI. Red Ventures, the private equity company that owns CNET, was accused of using ChatGPT for SEO farming, even when the information was incorrect. Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices.
They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of crimes they did not commit.

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they were inconsistent at best.

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users' conversations to other people on the service.

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

This story is continually updated with new information.
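The regurgitation concern can be illustrated with a toy check: does generated text share a long run of consecutive words with a known source? This is a naive sketch for illustration only; real memorization audits are far more involved:

```python
# Naive verbatim-regurgitation check: flag generated text that shares any
# n consecutive words (an n-gram) with a known source document.

def word_ngrams(text: str, n: int) -> set:
    """Return the set of n-word tuples appearing in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shares_long_ngram(generated: str, source: str, n: int = 8) -> bool:
    """True if any n consecutive words of `generated` also occur in `source`."""
    return bool(word_ngrams(generated, n) & word_ngrams(source, n))
```

Long shared n-grams are unlikely to arise by chance, which is why overlap checks like this are a common first pass when looking for copied training data.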
© 2025 TechCrunch Media LLC.