[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Latin_America_and_the_Caribbean] | [TOKENS: 10790]
History of the Jews in Latin America and the Caribbean

The history of the Jews in Latin America and the Caribbean began with conversos who joined the Spanish and Portuguese expeditions to the continents. The Alhambra Decree of 1492 led to the mass conversion of Spain's Jews to Catholicism and the expulsion of those who refused to do so. Many conversos, Jews who converted to Christianity under pressure during the Spanish Inquisition, did travel to the New World. While the Spanish Crown required settlers to be of Catholic lineage, conversos often presented themselves as devout Catholics to meet this requirement. Some sought refuge in the Americas to escape the persecution of the Inquisition, which followed them even to the Spanish viceregal towns. In places like Mexico and New Mexico, conversos maintained their faith in secret while outwardly adhering to Catholic practices. Their migration was driven both by the hope for greater economic opportunities and by the desire to escape religious oppression.

The first Jews came with the first expedition of Christopher Columbus, including Rodrigo de Triana and Luis de Torres. Throughout the 15th and 16th centuries, a number of converso families migrated to the Netherlands, France and eventually Italy, from where they joined other expeditions to the Americas. Others migrated to England or France and accompanied their colonists as traders and merchants. By the late 16th century, fully functioning Jewish communities were founded in the Portuguese colony of Brazil, in Dutch Suriname and Curaçao, in Spanish Santo Domingo, and in the English colonies of Jamaica and Barbados. In addition, there were unorganized communities of Jews in Spanish and Portuguese territories where the Inquisition was active, including Colombia, Cuba, Puerto Rico, Mexico and Peru. Many in such communities were crypto-Jews, who generally concealed their identity from the authorities. By the mid-17th century, the largest Jewish communities in the Western Hemisphere were located in Suriname and Brazil. Several Jewish communities in the Caribbean, Central America and South America flourished, particularly in areas under Dutch and English control, which were more tolerant. More immigrants came to the region as part of the massive emigration of Jews from Eastern Europe in the late 19th century. During and after World War II, many Ashkenazi Jews emigrated to South America for refuge. In the 21st century, fewer than 300,000 Jews live in Latin America, concentrated in Argentina, Brazil, Chile, Cuba, Mexico and Uruguay.

Argentina

Jews fleeing the Inquisition settled in Argentina, where they intermarried with native women. Portuguese traders and smugglers in the Virreinato del Río de la Plata were considered by many to be crypto-Jewish, but no community emerged after Argentina achieved independence. After 1810, and especially from about the mid-nineteenth century, more Jews, notably from France, began to settle in Argentina. By the end of the century, as in the United States, many Jewish immigrants were arriving from Eastern Europe (mainly Russia and Poland), fleeing Tsarist persecution; upon arrival they were generally called "Russians" in reference to their region of origin. Jewish individuals and families emigrated from Europe to Argentina before and after World War II, seeking to escape the Holocaust and later postwar antisemitism.
Between 250,000 and 300,000 Jews now live in Argentina, the vast majority of whom reside in the cities of Buenos Aires, Rosario, Córdoba, Mendoza, La Plata and San Miguel de Tucumán. Argentina has the third-largest Jewish community in the Americas, after the United States and Canada, and the sixth-largest in the world. According to recent surveys, more than a million Argentines have at least one grandparent of Jewish ethnicity. Under Law 26,089, the Jewish Argentine community legally receives seven holidays per year: both days of Rosh Hashanah, Yom Kippur, and the first two and last two days of Passover.

Bahamas

200 Jews lived in the Bahamas in 2022.

Bolivia

Jewish presence in Bolivia started at the beginning of the Spanish colonial period. Santa Cruz de la Sierra was founded in 1557 by Ñuflo de Chávez, who was accompanied by a small group of pioneers, including several crypto-Jews from Asunción and Buenos Aires. The city became known as a safe haven for Jews during the Inquisition in the region. A second wave of conversos came to Santa Cruz de la Sierra after 1570, when the Spanish Inquisition began operating in Lima. Alleged marranos (that is, New Christians whom others rightly or wrongly suspected of crypto-Judaism) settled in Potosí, La Paz and La Plata. After they gained economic success in mining and commerce, they faced suspicion and persecution from the Inquisition and local authorities. Most of these marrano families moved to Santa Cruz de la Sierra, an isolated urban settlement where the Inquisition did not bother the conversos. Most of the converso settlers were men, and many intermarried with indigenous or mestizo women, founding mixed-race or mestizo families. Conversos also settled in the adjacent towns of Vallegrande, Postrervalle, Portachuelo, Terevinto, Pucara, Cotoca and others. Many of Santa Cruz's oldest families are of partial Jewish heritage, and some traces of Jewish culture can still be found in family traditions as well as local customs. For example, some families keep heirloom seven-branched candlesticks or observe the custom of lighting candles on Friday at sunset. The typical local dishes can all be prepared with kosher practices (none mix milk and meat; pork is served, but never mixed with other foods). Scholars disagree on the provenance and recency of these practices. After almost five centuries, some of the descendants of these families claim awareness of Jewish origins but practice Catholicism (in certain cases with some Jewish syncretism).

From independence in 1825 to the end of the 19th century, some Jewish merchants and traders (both Sephardim and Ashkenazim) immigrated to Bolivia. Most took local women as wives, founding families that eventually merged into mainstream Catholic society. This was often the case in the eastern regions of Santa Cruz, Tarija, Beni and Pando, where these merchants came from Brazil or Argentina. During the 20th century, substantial Jewish settlement began in Bolivia. In 1905, a group of Russian Jews, followed by Argentines, settled in Bolivia. In 1917, it was estimated that there were 20 to 25 professing Jews in the country. By 1933, when the Nazi era in Germany started, there were 30 Jewish families. The first large Jewish immigration occurred during the 1930s; the population had climbed to an estimated 8,000 by the end of 1942. During the 1940s, 2,200 Jews emigrated from Bolivia to other countries.
Those who remained created communities in La Paz, Cochabamba, Oruro, Santa Cruz, Sucre, Tarija and Potosí. After World War II, a small number of Polish Jews immigrated to Bolivia. By 2006, approximately 700 Jews remained in Bolivia. There are synagogues in the cities of Santa Cruz de la Sierra, La Paz and Cochabamba. Most Bolivian Jews live in Santa Cruz de la Sierra.

Brazil

Jews settled early in Brazil, especially in areas of Dutch rule. They set up a synagogue in Recife in 1636, which is considered the first synagogue in the Americas. Most of these Jews were conversos who had fled Spain and Portugal for the religious freedom of the Netherlands when the Inquisition began in Portugal in 1536. In 1656, following the Portuguese reconquest of Brazil, Jews left for the Caribbean islands and for New Amsterdam under Dutch rule; the latter was taken over by the English in 1664 and renamed New York City. After independence in the 19th century, Brazil attracted more Jews among its immigrants, and pressures in Europe convinced more Jews to leave. Jewish immigration rose throughout the 19th and early 20th centuries, at a time of massive emigration from the Russian Empire (including Poland and Ukraine). Jewish immigration to Brazil was rather low between 1881 and 1900, although this was the height of other international immigration to Brazil; many Jews were going to more industrialized countries instead. Between 1921 and 1942, worldwide immigration to Brazil fell by 21%, but Jewish immigration to Brazil increased by 57,000 arrivals. This was in response to anti-immigration legislation and immigration quotas passed by the United States, Argentina, Canada and South Africa, which persisted even after the plight of Jews under the Third Reich became clear. The Brazilian government generally did not enforce its own immigration legislation. Lastly, the Jews in Brazil developed strong support structures and economic opportunities, which attracted Eastern European and Polish Jewish immigration.

Brazil has the 9th-largest Jewish community in the world, about 107,329 people as of the 2010 IBGE census. The Jewish Confederation of Brazil (CONIB) estimates that there are more than 120,000 Jews in Brazil. Brazilian Jews play an active role in politics, sports, academia, trade and industry, and are well integrated in all spheres of Brazilian life. The majority of Brazilian Jews live in the state of São Paulo, but there are also sizable communities in Rio de Janeiro, Rio Grande do Sul, Minas Gerais and Paraná.

Chile

Although a relatively small community, amounting to no more than 1% of the country's religious minorities, Jews in Chile have achieved prominent positions in its society, with key roles both before and after its independence in 1810. Most Chilean Jews today reside in Santiago and Valparaíso, but there are significant communities in the north and south of the country. Mario Kreutzberger, better known as "Don Francisco" and host of Sábado Gigante, the longest-running TV show in the world, is a Chilean Jew of German origin. Other Chilean Jews who have achieved recognition in arts and culture include Alejandro Jodorowsky, now established in France and best known internationally for his literary and filmic work, as well as Nissim Sharim (actor), Shlomit Baytelman (actress) and Anita Klesky (actress). Volodia Teitelboim, poet and former leader of the Chilean Communist Party, is one of the many Jews to have held important political positions in the country.
Tomás Hirsch is the leader of the radical Green-Communist coalition and was a presidential candidate in 2005. State ministers Karen Poniachick (Minister for Mining) and Clarisa Hardy (Minister for Social Affairs) are also Jewish. In the field of sport, the tennis player Nicolás Massú (gold medalist in Athens 2004 and former top-ten player in the ATP rankings) has a Jewish background. Many of the country's most important companies, particularly in the retail and commercial field, were set up by Jews; examples are Calderón, Gendelman, Hites and Pollak (commercial retailers) and Rosen (mattress and bed industries).

Colombia

"New Christians" fled the Iberian Peninsula to escape persecution and seek religious freedom during the 16th and 17th centuries. It is estimated that some reached the northern areas of Colombia, which at the time was known as New Granada. Most if not all of these people assimilated into Colombian society; some continue to practice traces of Sephardic Jewish rituals as family traditions. In the 18th century, practicing Spanish and Portuguese Jews came from Jamaica and Curaçao, where they had flourished under English and Dutch rule. These Jews started practicing their religion openly in Colombia at the end of the 18th century, although it was not officially legal to do so, given the established Catholic Church. After independence, Judaism was recognized as a legal religion, and the government granted the Jews land for a cemetery.

Many Jews who came during the 18th and 19th centuries achieved prominent positions in Colombian society. Some married local women and felt they had to abandon or diminish their Jewish identity. These included the author Jorge Isaacs, of English Jewish ancestry; the industrialist James Martin Eder (who adopted the more Christian name Santiago Eder when he translated his name into Spanish), born into the Latvian Jewish community; as well as the De Lima, Salazar, Espinoza, Arias, Ramirez, Perez and Lobo families of Caribbean Sephardim. Coincidentally, these persons and their families settled in the Cauca Valley region of Colombia, and they have continued to be influential members of society in cities such as Cali. Over the generations, most of their descendants were raised as secular Christians.

During the early part of the 20th century, numerous Sephardic Jewish immigrants came from Greece, Turkey, North Africa and Syria. Shortly after, Jewish immigrants began to arrive from Eastern Europe. A wave of immigrants came after the rise of Nazism in 1933 and the imposition of antisemitic laws and practices, including more than 7,000 German Jews. From 1939 until the end of World War II, immigration was brought to a halt by anti-immigrant feelings in the country and by restrictions on immigration from Germany. Colombia asked Germans who were on the U.S. blacklist to leave and allowed Jewish refugees in the country illegally to stay. The Jewish population increased dramatically in the 1950s and 1960s, and institutions such as synagogues, schools and social clubs were established throughout the largest cities in the country. The changing economy and the wave of kidnappings during the last decade of the 20th century led many members of Colombia's Jewish community to emigrate, most settling in Miami and other parts of the United States. The success of the nation's democratic security policy has since encouraged citizens to return: it drastically reduced violence in rural areas and criminality rates in urban areas, and it helped spur the economy.
The situation in Colombia has improved to the extent that many Venezuelan Jews are now seeking refuge in Colombia. In the early 21st century, most of the Jews in Colombia are concentrated in Bogotá, with about 20,000 members, and Barranquilla, with about 7,000 members. Large communities are found in Cali and Medellín, but with very few practicing Jews. Smaller communities are found in Cartagena and on the island of San Andrés. There are 14 official synagogues throughout the country. In Bogotá, the Jewish communities each run their own religious and cultural institutions. The Confederación de Asociaciones Judías de Colombia, located in Bogotá, is the central organization that coordinates Jews and Jewish institutions in Colombia. In the new millennium, after years of study, a group of Colombians with Jewish ancestry formally converted to Judaism in order to be accepted as Jews according to halakha.

Costa Rica

The first Jews in Costa Rica were probably conversos, who arrived in the 16th and 17th centuries with Spanish expeditions. In the 19th century, Sephardic merchants from Curaçao, Jamaica, Panama and the Caribbean followed. They lived mostly in the Central Valley, married local women, and were soon assimilated into the country's general society; most eventually gave up Judaism altogether. A third wave of Jewish immigrants came before World War I and especially in the 1930s, as Ashkenazi Jews fled a Europe threatened by Nazi Germany. Most of these immigrants came from the Polish town of Żelechów. The term polacos, originally a slur referring to these immigrants, has come to mean door-to-door salesman in colloquial Costa Rican Spanish. The country's first synagogue, the Orthodox Shaarei Zion, was built in 1933 in the capital, San José (it is located along 3rd Avenue and 6th Street). Along with a wave of nationalism, there was some antisemitism in Costa Rica in the 1940s, but generally there have been few problems. Since the late 20th century there has been a fourth wave of Jewish immigration, made up of American and Israeli expatriates who are retiring in the country or doing business there.

The Jewish community is estimated to number 2,500 to 3,000 people, most of them living in the capital. The San José suburb of Rohrmoser has a strong Jewish influence due to its residents; a couple of synagogues are located there, as well as a kosher deli and restaurant. The Plaza Rohrmoser shopping center had the only kosher Burger King in the country. The Centro Israelita Sionista (Zionist Israelite Center) is a large Orthodox compound housing a synagogue, library and museum. In 2015, the Chaim Weizmann comprehensive school in San José had over 300 students in kindergarten, primary and secondary grades, learning in both Spanish and Hebrew.

Cuba

Jews have lived on the island of Cuba for centuries. Some Cubans trace Jewish ancestry to crypto-Jews, called Marranos, who fled the Spanish Inquisition. Early colonists generally married native women, and few of their descendants, after centuries of residence, practice Judaism today. There was significant Jewish immigration to Cuba in the first half of the 20th century, as in other countries of Latin America. During this time, Beth Shalom Temple in Havana was constructed and became the most prominent Jewish synagogue in Latin America. There were 15,000 Jews in Cuba in 1959, but many Jewish businessmen and professionals left Cuba for the United States after the Cuban Revolution, fearing class persecution under the Communists.
In the early 1990s, Operation Cigar was launched, and over a period of five years more than 400 Cuban Jews secretly immigrated to Israel. In February 2007, The New York Times estimated that about 1,500 Jews lived in Cuba, most of them (about 1,000) in Havana. Beth Shalom Temple is an active synagogue that serves many Cuban Jews.

Curaçao

Curaçao has the oldest active Jewish congregation in the Americas, dating to 1651, and the oldest synagogue of the Americas, in continuous use since its completion in 1732 on the site of a previous synagogue. The Jewish community of Curaçao also played a key role in supporting early Jewish congregations in the United States in the 18th and 19th centuries, including in New York City and in Newport, Rhode Island, where the Touro Synagogue was built. Growth in Latin American Jewish communities, primarily in Colombia and Venezuela, resulted from the influx of Curaçaoan Jews. In 1856 and 1902, the Jews of Coro (Venezuela) were plundered, maltreated, and driven to seek refuge in their native Curaçao.

Dominican Republic

Converso merchants of Sephardic origin arrived in southern Hispaniola during the 15th, 16th and 17th centuries, fleeing the Spanish Inquisition. Over the centuries, many Jews and their descendants assimilated into the general population, and some converted to Catholicism, although many of the country's Jews still retain elements of the Sephardic culture of their ancestors. Later, in the 18th and 19th centuries, many Sephardic families emigrated from Curaçao to the Dominican Republic. Sosúa, a small town close to Puerto Plata, was founded by Jews fleeing the rising Nazi regime of the 1930s. Rafael Trujillo, the country's dictator, welcomed many Jewish refugees to the island, mainly for their skills rather than out of concern for the religiously persecuted. Present-day Sosúa still possesses a synagogue and a museum of Jewish history. Descendants of those Jews can still be found in many other villages and towns in the north of the island close to Sosúa.

Ecuador

For some time prior to the 20th century, many Jews in Ecuador were of Sephardic ancestry, and some retained their use of the Judaeo-Spanish (Ladino) language. Today, however, most Jewish people in Ecuador are of Ashkenazi ancestry. Some assume that these groups were among the European settlers of Ecuador. Many Jewish people came from Germany in 1939 on a ship called the Koenigstein. During the years 1933-43, there was a population of 2,700 Jewish immigrants. In 1939, the Jewish population, mostly German and Polish Jews, was expelled by a decree of the Italian-influenced government of Alberto Enriquez Gallo. Antisemitism spread in the population but was checked by the intervention of the American embassy. In 1945, there was a reported population of 3,000, about 85% of them European refugees. Jewish immigration to Ecuador rose as the Holocaust began. In 1950, an estimated 4,000 Jews were living in Ecuador. Most of the active Jewish communities in Ecuador are of German origin. The majority of Ecuadorian Jews live in Quito and Guayaquil. There is a Jewish school in Quito. In Guayaquil, there is a Jewish community under the auspices of Los Caminos de Israel called the Nachle Emuna Congregation. As of 2017, there were only 290 reported Jews in the country. Among the Jewish immigrants who came to Ecuador were also professionals, intellectuals and artists, some of whom were professors and writers.
Others included Alberto Capua, Giorgio Ottolenghi, Aldo Mugla, Francisco Breth, Hans Herman, Leopold Levy, Paul Engel, Marco Turkel, Henry Fente, Benno Weiser, Otto Glass, Egon Fellig and Karl Kohn. Olga Fis valued and spread Ecuadorian folk art, and Constanza Capua conducted research in archaeology, anthropology and colonial art. Of Sephardic ancestry were Leonidas Gilces and his younger brother Angel Theodore Gilces, who helped many immigrants, such as Charles Liebman, who reached the capital with his library, which became the most important in the capital. Simon Goldberg, who had run the Goethe library of old books in Berlin, contributed to the dissemination of reading. Vera Kohn was a psychologist and teacher, occupations that at mid-century held little interest for Ecuadorian women, who tended to remain in their homes, devoid of intellectual curiosity and caring only about social life. The immigrants were largely uninterested in politics, with the exception of Paul Beter, a member of the second generation of Jews, who became Minister of Economy and Central Bank President.

El Salvador

The first Jews arrived in El Salvador with the first Spanish settlers in the 16th century: conversos who practiced Judaism in secret. Alsatian-born Bernardo Haas, who came to El Salvador in 1868, was believed to be the country's first "recognized" Jewish immigrant. Another Jew, Leon Libes, was documented as the first German Jew in 1888. Sephardic families also arrived from countries such as Turkey, Egypt, Tunisia, Spain and France. De Sola helped to found the first synagogue and became an invaluable member of the Jewish community. From 1936, with war looming in Europe, the Jewish community worked to help relatives escape; some already had family in El Salvador, but others were forced to go to countries such as Brazil, Ecuador, Guatemala and Panama. On 30 July 1939, President Martinez barred the entry of fifty Jewish refugees bound for El Salvador on the German ship Portland. On 11 September 1948, the community founded a school, the "Colegio Estado de Israel", which it continues to support. According to the latest census, there are currently about 100 Jews living in El Salvador, mostly in the capital city of San Salvador; most of them have Sephardic roots. There is a small town called Armenia in rural El Salvador where people have practiced Orthodox Sephardic Judaism since the time of the Inquisition.

French Guiana

Jews arrived in French Guiana by way of the Dutch West India Company. Later, on 12 September 1659, Jews arrived from the Dutch colonies in Brazil. The company appointed David Nassy, a Brazilian refugee, patron of an exclusive Jewish settlement on the western side of the island of Cayenne, an area called Remire or Irmire. From 1658 to 1659, Paulo Jacomo Pinto negotiated with the Dutch authorities in Amsterdam to allow a group of Jews from Livorno, Italy, to settle in the Americas. On 20 July 1660, more than 150 Sephardic Jews left Livorno (Leghorn) and settled in Cayenne. The French agreed to those terms, an exceptional policy that was not common among the French colonies. Nevertheless, nearly two-thirds of the population left for the Dutch colony of Suriname, and over the following decades the Leghorn Jews of Cayenne emigrated to Suriname. In 1667, the remaining Jewish community was captured by occupying British forces, who moved the population to either Suriname or Barbados to work in sugarcane production. Since the late 17th century, few Jews have lived in French Guiana.
In 1992, 20 Jewish families from Suriname and North Africa attempted to re-establish the community in Cayenne. A Chabad organization exists in the country and maintains Jewish life within the community. Today, 800 Jews live in French Guiana, predominantly in Cayenne.

Guatemala

The first Jewish migrations to Guatemala date back to the Spanish period. Historical records from the Mexican Inquisition reveal that the earliest Jewish settlers were crypto-Jews and converts, and their descendants, mainly converted, remain, as in other Spanish American countries. The modern Jewish community in Guatemala, however, traces its roots to German Jewish immigrants who arrived in the mid-19th century. The Jews in Guatemala are mainly descendants of immigrants from Germany, Eastern Europe and the Middle East who arrived in the second half of the 19th century and the first half of the 20th. The first Jewish families arrived from the town of Kempen in Posen, Prussia (today Kępno, Poland), establishing themselves in Guatemala City and Quetzaltenango. Immigrants from the Middle East (mainly Turkey) arrived during the first three decades of the 20th century, and many more immigrated during World War II. There are approximately 900 Jews living in Guatemala today, most of them in Guatemala City. Today, the Jewish community in Guatemala is made up of Orthodox, Sephardi, Eastern European and German Jews. In 2014, numerous members of the Lev Tahor and Toiras Jesed communities, who practice a particularly austere form of Orthodox Judaism, began settling in the village of San Juan La Laguna. Mainstream Jewish communities were concerned about the reputation of this group, which had left both the US and Canada under allegations of child abuse, underage marriage and child neglect. Despite the tropical heat, the members of the community continued to wear long black cloaks for men and the full black chador for women.

Haiti

When Christopher Columbus arrived in Santo Domingo, as he named it, among his crew was an interpreter, Luis de Torres, who was Jewish; he was one of the first Jews to settle on Santo Domingo, in 1492. When the western part of the island was taken over by France in 1633, many Dutch Sephardic Jews came from Curaçao, arriving in 1634, after the Portuguese had taken over there. Others immigrated from English colonies such as Jamaica, contributing to the merchant trade. In 1683, Louis XIV banned all religions except Catholicism in the French colonies and ordered the expulsion of Jews, but this was lightly enforced; Sephardic Jews remained in Saint-Domingue as leading officials in French trading companies. After the French Revolution instituted religious freedom in 1791, additional Jewish merchants returned to Saint-Domingue and settled in several cities. Some likely married free women of color, establishing families. In the 21st century, archaeologists discovered a synagogue of crypto-Jews in Jérémie in the southwestern area of the island, and a few Jewish tombstones have been uncovered in Cap-Haïtien, Cayes and Jacmel. In the late eighteenth century, at the time of the French Revolution, the free people of color pressed for more rights in Saint-Domingue, and a slave revolt led by Toussaint L'Ouverture broke out in 1791 in the north of the island.
Slaves considered Jews to be among the white oppressor group. Through the years of warfare, many people of the Jewish community were among the whites killed, and some Jews were expelled when the slaves and free blacks took power and instituted restrictions on foreign businessmen. Haiti achieved independence in 1804 but was not recognized by other nations for some time and struggled economically, based on a peasant culture producing coffee as a commodity crop. Foreigners were prohibited from owning land and were subject to other restrictions. Planters and other whites were killed in 1805, and Jews were among the whites and people of color who fled to the United States, many settling in New Orleans or Charleston. Race, as defined in the slavery years, and nationality became more important in Haiti in the 19th century than religion, and Jews were considered whites and nationals of their respective countries. Later in the century, Polish Jews immigrated to Haiti due to the civil strife in Poland and settled in Cazale, in the northwest region of the country. Most Jews settled in port cities, where they worked as traders and merchants. In 1881, a crowd in Port-au-Prince attacked a group of Jews but was driven back by militiamen. By the end of the 19th century, a small number of Mizrahi Jewish families had immigrated to Haiti from Lebanon, Syria and Egypt; a higher number of Levantine Christian traders arrived at the same time. German Jews arrived with other German businessmen; they were highly acculturated and were considered part of the German community. In 1915, there were 200 Jews in Haiti. During the 20 years of American occupation, many of the Jews emigrated to the United States; the US and Haiti had joint interests in reducing the number and influence of foreign businessmen. In 1937, the government issued passports and visas to Jews of Germany and Eastern Europe to help them escape Nazi persecution, while retaining and restricting control of any naturalization of foreigners. During this time, 300 Jews lived on the island. Most of the Jews stayed until the late 1950s, when they moved on to the United States or Israel. As of 2010, the number of known Jews in Haiti is estimated at 25, residing in the relatively affluent suburb of Pétion-Ville, outside Port-au-Prince. Haiti and Israel maintain full diplomatic relations, but Israel's nearest permanent diplomat to the region is based in the neighboring Dominican Republic.

Honduras

From the early 20th century to the 1980s, Jewish immigrants came to Honduras, mainly from Russia, Poland, Germany, Hungary and Romania. There was also immigration from Greece, of Sephardic origin, and from Turkey and North Africa, of Mizrahi origin. Throughout the 1970s and 1980s, Honduras absorbed a large number of Jewish immigrants from Israel. Over the past two decades, Honduras has experienced a resurgence of Jewish life, and the communities in Tegucigalpa and San Pedro Sula have grown more active. In 1998, Hurricane Mitch destroyed the synagogue that was part of the Jewish community center in Honduras, but the Jewish community contributed money to rebuild the temple. Most Honduran Jews live in Tegucigalpa.

Jamaica

The history of the Jews in Jamaica predominantly dates back to the 1490s, when many Jews from Portugal and Spain fled the persecution of the Holy Inquisition. When the English captured the colony of Jamaica from Spain in 1655, Jews who were living as conversos began to practice Judaism openly.
In 1719, the synagogue Kahal Kadosh Neve Tsedek was built in Port Royal. By 1720, 18 percent of the population of the capital, Kingston, was Jewish. For the most part, Jews practiced Orthodox rituals and customs. A recent study estimated that nearly 424,000 Jamaicans are descendants, by birth or ancestry, of Jewish (Sephardic) immigrants who came to Jamaica from Portugal and Spain from 1494 to the present; Jewish documents, gravestones written in Hebrew, and recent DNA testing support this. While many are non-practicing, it is recorded that over 20,000 Jamaicans religiously identify as Jews. Common Jewish surnames in Jamaica are Abrahams, Alexander, Isaacs, Levy, Marish, Lindo, Lyon, Sangster, Myers, Da Silva, De Souza, De Cohen, De Leon, DeMercado, Barrett, Babb, Magnus, Codner, Pimentel, DeCosta, Henriques and Rodriques. In 2006, the Jamaican Jewish Heritage Center opened to celebrate 350 years of Jews living in Jamaica.

Mexico

New Christians arrived in Mexico as early as 1521. Due to the strong presence of the Catholic Church in Mexico, few conversos and even fewer Jews migrated there after the Spanish conquest of Mexico. Then, in the late 19th century, a number of German Jews settled in Mexico as a result of invitations from Maximilian I of Mexico, followed by a large wave of Ashkenazi Jews fleeing pogroms in Russia and Eastern Europe. A second large wave of immigration occurred as the Ottoman Empire collapsed, leading many Sephardic Jews from Turkey, Morocco and parts of France to flee. Finally, a wave of immigrants fled the increasing Nazi persecutions in Europe during World War II. According to the 2010 census, there are 67,476 Jews in Mexico, making them the third-largest Jewish community in Latin America. Chabad, based in Cancún, has reached out to the whole of Quintana Roo and the Mexican Caribbean, including Playa del Carmen, Cozumel, Isla Mujeres and Mérida; in 2010 it opened a branch in Playa del Carmen to expand its activities, and Rabbi Mendel Goldberg, along with his wife Chaya and two daughters, was assigned to direct the activities there and open a new center. The state of Baja California has also had a Jewish presence for the last few hundred years. La Paz, Mexico, was home to many Jewish traders who would dock at the port and do business, and many locals in La Paz descend from the prominent Schcolnik, Tuschman and Habiff families, although most are assimilated into Mexican life. In recent years, the tourist industry has picked up in Baja California Sur, where many American retirees have purchased properties and settled. In 2009, with a grassroots Jewish community forming and with the help of Tijuana-based businessman Jose Galicot, Chabad sent Rabbi Benny Hershcovich and his family to run the Cabo Jewish Center, located in Los Cabos, Mexico, which provides Jewish services and assistance to Jews scattered throughout the Baja Sur region, including La Paz, Todos Santos and the East Cape.

Nicaragua

In the 20th century, Nicaragua's Jewish community consisted mostly of immigrants from Eastern Europe who arrived after 1929. The Jews in Nicaragua were a relatively small community, with most living in Managua. Jews made significant contributions to Nicaragua's economic development while dedicating themselves to farming, manufacturing and retail sales. The number of Jews in Nicaragua is thought to have peaked at about 250 in 1972.
Some 60 Jews left the country after the 1972 earthquake that devastated Managua and destroyed many Jewish businesses, while others fled during the violence and unrest of the 1978-1979 Sandinista Revolution. When the Nicaraguan dictator Anastasio Somoza was deposed in 1979, almost all of the remaining Nicaraguan Jews left the country, concerned about their future under the incoming socialist government. Beginning in 1983, the Reagan administration in the U.S. made a concerted effort to increase domestic support for funding the Contras by persuading American Jews that the Sandinista government was antisemitic. According to the Contra leader Edgar Chamorro, CIA officers told him of this plan in a 1983 meeting, justifying it with the antisemitic argument that Jews controlled the media and that winning them over would be key to a public relations success. The Anti-Defamation League supported the Reagan administration's charges of Sandinista antisemitism, having actively worked with Nicaraguan Jews to reclaim a synagogue that had been firebombed by Sandinista militants in 1978 and seized by the Sandinista government in 1979. However, a variety of left-wing organizations that opposed the Reagan administration's policies in Latin America, including the progressive New Jewish Agenda and the leftist NGO the Council on Hemispheric Affairs, as well as the American Jewish Committee, all found that there was no evidence to support the U.S. charge of government antisemitism. Anthony Quainton, U.S. ambassador to Nicaragua, also reported no evidence of government antisemitism after an investigation by embassy staff. The dozens of Nicaraguan Jews who had fled the country supported the Reagan administration's charges of antisemitism, citing several instances of intimidation, harassment and arbitrary arrest, but two of the Jews who remained in Nicaragua denied their accuracy and were widely cited in the media at the time. After Daniel Ortega lost the 1990 presidential election, Nicaraguan Jews started returning to Nicaragua. Prior to 1979, the Jewish community had no rabbi or mohel (circumcision practitioner). In 2005, the Jewish community numbered about 50 people and included three mohalim, but had no ordained rabbi. In 2017, there was a mass conversion of 114 Nicaraguans to Judaism.

Panama

The presence of Anusim, or crypto-Jews, was recorded as early as the first migrations of Spaniards and Portuguese to the territory. The researcher and writer Elyjah Byrzdett explains that the Judeo-converso phenomenon in Panama can be divided into two main periods: the Castilian period and the Portuguese period. The Castilian period was marked by the arrival of crypto-Jews of Castilian origin, who played an active role in the colonization of the territory. When Rodrigo de Bastidas arrived on the Isthmus of Panama in 1501, he was accompanied by recent converts to Christianity, and Judeo-conversos were present in the region from the first Spanish expeditions and throughout the conquest. The governor and founder of the city of Panama, Pedro Arias Dávila (known as Pedrarias), had Jewish ancestry on both his paternal and maternal sides. His paternal grandfather, Ysaque Abenazar, was an influential member of the Jewish community of Segovia who later converted to Catholicism and adopted the name Diego Arias Dávila. Although Pedrarias's religious beliefs remain uncertain, it is established that he protected Judeo-conversos from the persecution led by the Franciscan friar Juan de Quevedo.
Other notable figures of converso origin included several captains and governors. In his work The Pisa Family: A Converso Lineage, Byrzdett documents the detailed genealogy of the Pisa family, whose descendants arrived in Panama before settling in other regions. Although not all crypto-Jews bore the name "de Pisa," the author uses it as a reference due to its significance as a common lineage among several converso families in the region.

The Portuguese period began in 1580, following the dynastic union of Portugal with the Spanish Crown. During this period, Portuguese crypto-Jews, who were better organized and had more resources, managed to establish a house of prayer on Calafates Street, located behind the old cathedral of Panama la Vieja. However, the Inquisition intensified its persecution of Judaizers, culminating in 1640 with an event known as the "Great Conspiracy," which dismantled much of the crypto-Jewish network on the Isthmus. From then on, their presence in historical records became more difficult to trace, as fear of persecution led many to further conceal their identity. One of the most documented episodes of this persecution was the arrest of the Portuguese Sebastián Rodríguez, accused of being a Judaizer, that is, a practitioner of Judaism. Rodríguez led a group of crypto-Jews, including Antonio de Ávila, González de Silva, Domingo de Almeyda and a Mercedarian friar, all secretly practicing Judaism. During the trial, four doctors confirmed the presence of a circumcision mark on Rodríguez, which was used as evidence against him.

When the Isthmus joined Simón Bolívar's federation project, a new wave of Jewish migration took place, revitalizing the Mosaic faith in the region. These early Jewish immigrants arrived under a new policy that encouraged religious freedom in the newly independent territories. Thanks to their proficiency in languages such as German, Spanish, French, English, Dutch and Papiamento, they played a crucial role as intermediaries and translators, facilitating communication between the local population and foreigners arriving in or passing through the region. Sephardic (Judeo-Spanish) Jews, mainly from nearby islands such as Curaçao, St. Thomas and Jamaica, and Ashkenazi (Judeo-German) Jews from Central and Eastern Europe began arriving in Panama in significant numbers around the mid-nineteenth century, attracted by economic opportunities such as the construction of the interoceanic railroad and the California Gold Rush. This migratory flow marked an important chapter in the history of Panama's Jewish community.

The Republic of Panama, in its current form, would be significantly different without the notable contributions of the Panamanian Jewish community. Its role in the struggle for the country's independence in 1903 was crucial and prevented the failure of the separatist movement. Prominent members of the Kol Shearith Israel congregation, such as Isaac Brandon, M.D. Cardoze, M.A. De León, Joshua Lindo, Morris Lindo, Joshua Piza and Isaac L. Toledano, provided essential financial support to the Revolutionary Junta when the promised funds from Philippe Jean Bunau-Varilla failed to materialize. Without their contribution, the lives of the leaders responsible for Panama's separation from Colombia could have been in jeopardy.
For this reason, the commitment of the Jewish community was of vital importance at this historical moment for Panama. They were followed by other waves of immigration: during the First World War from the disintegrating Ottoman Empire, before and after the Second World War from Europe, from Arab countries because of the exodus of 1948, and more recently from South American countries suffering economic crises. The center of Jewish life in Panama is Panama City, although historically small groups of Jews settled in other cities, such as Colón, David, Chitre, La Chorrera, Santiago de Veraguas and Bocas del Toro. Those communities are disappearing as families move to the capital in search of education for their children and for economic reasons. Today the Jewish community numbers some 20,000. Panama is the only country in the world, other than Israel, to have had two Jewish presidents in the twentieth century: in the sixties Max Delvalle was first vice president, then president, and his nephew, Eric Arturo Delvalle, was president between 1985 and 1988. The two were members of the Kol Shearith Israel synagogue and were involved in Jewish life.

Paraguay

Toward the end of the 19th century, Jewish immigrants arrived in Paraguay from countries such as France, Switzerland and Italy. During World War I, Jews from Palestine (Jerusalem), Egypt and Turkey arrived in Paraguay, mostly Sephardic Jews. In the 1920s, there was a second wave of immigrants from Ukraine and Poland. Between 1933 and 1939, between 15,000 and 20,000 Jews from Germany, Austria and Czechoslovakia took advantage of Paraguay's liberal immigration laws to escape from Nazi-occupied Europe. After World War II, most Jews who arrived in Paraguay were survivors of concentration camps. Today, there are 1,000 Jews, mostly of German descent, living mainly in Paraguay's capital, Asunción.

Peru

In Peru, conversos arrived at the time of the Spanish conquest. At first they lived without restrictions, because the Inquisition was not active in Peru at the beginning of the Viceroyalty. With the advent of the Inquisition, New Christians began to be persecuted and, in some cases, executed. In this period, these people were sometimes called "marranos", "conversos", and "cristianos nuevos" (New Christians), even if they had not been among the original converts from Judaism and had been reared as Catholics. The descendants of these colonial Sephardic converts to Christianity settled mainly in the northern highlands and the northern high jungle, in places such as Cajamarca and the northern highlands of Piura (Ayabaca and Huancabamba, among others), where they assimilated with the local people, owing to cultural and ethnic contact with the southern highlands of Ecuador. In modern times, before and after the Second World War, some Ashkenazi Jews, mainly Western and Eastern Slavic and Hungarian, migrated to Peru, mostly to Lima. Today, Peruvian Jews represent an important part of the economy and politics of Peru; the majority of them are from the Ashkenazi community.

Puerto Rico

Puerto Rico is currently home to the largest Jewish community in the Caribbean, with over 3,000 Jews supporting four synagogues: three in the capital city of San Juan (one each Reform, Conservative and Chabad), as well as a Satmar community known as Toiras Jesed in the town of Mayagüez, in the western part of the island.
Many Jews managed to settle on the island as secret Jews, living in the island's remote mountainous interior, as did the early Jews in all Spanish and Portuguese colonies. In the late 1800s, during the Spanish-American War, many Jewish American servicemen gathered together with local Puerto Rican Jews at the Old Telegraph building in Ponce to hold religious services. Many Central and Eastern European Jews came after World War II.

Suriname

Suriname has one of the oldest Jewish communities in the Americas. During the Inquisition in Portugal and Spain around 1500, many Jews fled to the Netherlands and the Dutch colonies to escape social discrimination and inquisitorial persecution, which sometimes included torture and condemnation to the stake. Those who converted to the Catholic faith were called New Christians, conversos, and, less often, "Marranos". The stadtholder of the King of Portugal gave those who wanted to depart some time to settle their affairs, and supplied them with 16 ships and safe conduct to leave for the Netherlands. The Dutch government gave them the opportunity to settle in Brazil, and most found a home in Recife, where merchants became cocoa growers. But the Portuguese in Brazil forced many Jews to move to the northern Dutch colonies in the Americas, the Guianas. Jews settled in Suriname in 1639. Suriname became one of the most important centers of the Jewish population in the Western Hemisphere, and Jews there were planters and slaveholders. When World War II arrived, many Jewish refugees from the Netherlands and other parts of Europe fled to Suriname for a few years. Today, 2,765 Jews live in Suriname.

Trinidad and Tobago

Trinidad and Tobago, a former British colony, is home to over 500 Jews.

Uruguay

Uruguay is home to the fifth-largest Jewish community in Latin America, but the largest as a proportion of the country's total population. Jewish presence began during the colonial era with the arrival of conversos to the Banda Oriental, fleeing the Spanish Inquisition. Considerable Jewish immigration, however, began at the end of the 19th century with the arrival of some Sephardic Jews from neighboring countries, and grew during the first half of the 20th century with the arrival of a large number of Ashkenazim. By the first decades of the 20th century, the Jewish community had already set up an educational network, and its presence was notable in several areas of the capital, Montevideo, such as the Villa Muñoz neighborhood, which became known as the city's Jewish quarter. In addition, Jews from Belarus and Bessarabia formed an agricultural community in the rural area of the Paysandú Department. Most of the Jewish immigration to Uruguay took place in the 1920s and 1930s, although in the latter period some fascist and liberal anti-immigration sectors opposed all foreign immigration, which weighed heavily on Jewish immigration. Nevertheless, the country has traditionally been the destination of a large number of Jewish refugees during and after World War II. In 1940, the Central Israelite Committee of Uruguay was founded, uniting the different Jewish communities that had formed based on the place of origin of the Jews who arrived in the country. It is estimated that between the 1950s and 1960s the Jewish community in Uruguay numbered approximately 50,000 people.
Venezuela

The history of Venezuelan New Christians most likely began in the middle of the 17th century, when some records suggest that groups of conversos lived in Caracas and Maracaibo. At the turn of the 19th century, Venezuela and Colombia were fighting their Spanish colonizers in wars of independence. Simón Bolívar, Venezuela's liberator, and his sister found refuge and material support for his army in the homes of Jews from Curaçao. After independence, in 1826, practicing Jews came from Curaçao, where they had flourished under Dutch rule, to Santa Ana de Coro. Judaism was recognized as a legal religion, and the government granted the Jews land for a cemetery. According to a national census taken at the end of the 19th century, 247 Jews lived in Venezuela as citizens in 1891. In 1907, the Israelite Beneficial Society, which became the Israelite Society of Venezuela in 1919, was created as an organization to bring together all the Jews who were scattered through various cities and towns across the country. By 1943, nearly 600 German Jews had entered the country, with several hundred more becoming citizens after World War II. By 1950, the community had grown to around 6,000 people, even in the face of immigration restrictions. During the first decades of the 21st century, many Venezuelan Jews decided to emigrate due to the growth of antisemitism and to the political crisis and instability. Currently, there are around 10,000 Jews living in Venezuela, more than half of them in the capital, Caracas. Venezuelan Jewry is split equally between Sephardim and Ashkenazim, and all but one of the country's 15 synagogues are Orthodox. The majority of Venezuela's Jews are members of the middle class. The current president of Venezuela, Nicolas Maduro, claims to be of Sephardic Jewish descent. Jewish groups, such as the Latin American Jewish Congress, have accused Maduro and his predecessor, Hugo Chavez, of fostering antisemitism.

[Table: Reported Jewish populations in the Americas and the Caribbean in 2014. Sources: CIA World Factbook, with most estimates current as of July 2014; Jewish Virtual Library, "Vital Statistics: Jewish Population of the World (1882 - Present)".]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Printf] | [TOKENS: 2827]
printf

printf is a C standard library function, and also a Linux terminal (shell) command, that formats text and writes it to standard output. The function accepts a format C-string argument and a variable number of value arguments that the function serializes per the format string. A mismatch between the format specifiers and the count and type of the values results in undefined behavior and possibly a program crash or other vulnerability. The format string is encoded as a template language consisting of verbatim text and format specifiers, each of which specifies how to serialize a value. As the format string is processed left-to-right, a subsequent value is used for each format specifier found. A format specifier starts with a % character and has one or more following characters that specify how to serialize a value.

The standard library provides other, similar functions that form a family of printf-like functions. The functions share the same formatting capabilities but provide different behavior, such as output to a different destination or safety measures that limit exposure to vulnerabilities. Functions of the printf family have been implemented in other computer programming contexts (i.e., programming languages) with the same or similar syntax and semantics. The scanf() C standard library function complements printf by providing formatted input (a.k.a. lexing, a.k.a. parsing) via a similar format string syntax.

The name printf is short for "print formatted", where print originally referred to output to a printer, although the function is not limited to printer output. Today, print refers to output to any text-based environment such as a terminal or a file.

History

Early programming languages like Fortran used special statements, with syntax different from other calculations, to build formatting descriptions. In Fortran, the format was specified in a separate FORMAT statement identified by a line number (601, in the article's original example), which the PRINT command referenced; with input arguments such as 100, 200, and 1500.25, the output would show the values laid out according to that format description. In 1967, BCPL appeared; its library included the writef routine, which accepted a format string followed by the values to write. In 1968, ALGOL 68 had a more function-like API, but still used special syntax ($ delimiters surrounded special formatting syntax). In contrast to Fortran, using normal function calls and data types simplifies the language and compiler, and allows the implementation of the input/output (I/O) to be written in the same language. These advantages were thought to outweigh the disadvantages (such as no type safety in many instances) up until the 2000s, and in most newer languages of that era I/O is not part of the syntax. People have since learned that this potentially results in consequences ranging from security exploits to hardware failures (e.g., a phone's networking capabilities being permanently disabled after trying to connect to an access point named "%p%s%s%s%s%n"). Modern languages, such as C++20 and later, tend to include format specifications as a part of the language syntax, which restores type safety in formatting to an extent and allows the compiler to detect some invalid combinations of format specifiers and data types at compile time.

In 1973, printf was included as a C standard library routine as part of Version 4 Unix. In 1990, the printf shell command, modeled after the C standard library function, was included with 4.3BSD-Reno. In 1991, a printf command was included with GNU shellutils (now part of GNU Core Utilities), and the syntax (options, arguments, etc.)
of this "shell command" are different from the C-Language function e.g.: the "format section" does not use the postional arguments with a "$" symbol (n$) in the same way as printf() function: In 2004, Java 5.0 (1.5) released, which extended the class java.io.PrintStream, adding a method named printf() which functions analogously to printf() in C. Thus to print a formatted string to the standard output stream, one uses System.out.printf(). Java further introduced the method format to its string class java.lang.String. The need to do something about the range of problems resulting from lack of type safety has prompted attempts to make the C++ compiler printf-aware. The -Wformat option of GNU Compiler Collection (GCC) allows compile time checks to printf calls, enabling the compiler to detect a subset of invalid calls (and issue either a warning or an error, terminating compiling, as set by other flags). Since the compiler is inspecting printf format specifiers, enabling this effectively extends the C++ syntax by making formatting a part of it. To address usability issues with the existing C++ input/output support, and to avoid safety issues of printf the C++ standard library was revised to support a new type-safe formatting starting with C++20. The approach of std::format() resulted from incorporating Victor Zverovich's fmtlib API into the language specification (Zverovich wrote the first draft of the new format proposal); consequently, fmtlib is an implementation of the C++20 format specification. In C++23, another function, std::print() and std::println(), was introduced that combines formatting and outputting and therefore is a functional replacement for printf(). However, no analogous ::scanf() modernization has been introduced, though one has been proposed based on scnlib. As the format specification has become a part of the language syntax, a C++ compiler is able to prevent invalid combinations of types and format specifiers in many cases. Unlike the -Wformat option, this is not an optional feature. The format specification of fmtlib and std::format() is, in itself, an extensible "mini-language" (referred to as such in the specification), an example of a domain-specific language. As such, std::print completes a historical cycle, bringing the state-of-the-art (as of 2024) back to what it was in the case of Fortran's first PRINT implementation in the 1950s. Format specifier Formatting of a value is specified as markup in the format string. For example, the following outputs Your age is and then the value of the variable age in decimal format. The syntax for a format specifier is: The parameter field is optional. If included, then matching specifiers to values is not sequential. The numeric value n selects the n-th value parameter. This is a POSIX extension, not C99.[citation needed] This field allows for using the same value multiple times in a format string instead of having to pass the value multiple times. If a specifier includes this field, then subsequent specifiers must also. For example, outputs: 17 0x11; 16 0x10 This field is very useful for localizing messages to different natural languages that use different word orders. In Windows API, support for this feature is via a different function, printf_p. The flags field can be zero or more of (in any order): The width field specifies the minimum number of characters to output. If the value can be represented in fewer characters, then the value is left-padded with spaces so that output is the number of characters specified. 
The width field specifies the minimum number of characters to output. If the value can be represented in fewer characters, then it is left-padded with spaces so that the output is the specified number of characters. If the value requires more characters, then the output is longer than the specified width; a value is never truncated. For example, printf("%3d", 12); specifies a width of 3 and outputs 12 with a space on the left, for a total of 3 characters. The call printf("%3d", 1234); outputs 1234, which is 4 characters long, since that is the minimum number of characters needed for that value even though the specified width is 3. If the width field is omitted, the output is the minimum number of characters for the value. If the field is specified as *, then the width value is read from the list of values in the call. For example, printf("%*d", 3, 10); outputs a space followed by 10, where the second parameter, 3, is the width (matching the *) and 10 is the value to serialize (matching the d). Though not part of the width field itself, a leading zero in the width is interpreted as the zero-padding flag mentioned above, and a negative width value is treated as the corresponding positive value in conjunction with the left-alignment - flag, also mentioned above. The width field can be used to format values as a table (tabulated output), but columns do not align if any value is larger than fits in the specified width; for example, if a column has width 3, a value such as 1234 does not fit and pushes that column out of alignment. The precision field usually specifies a maximum limit on the output, depending on the formatting type. For floating-point numeric types, it specifies the number of digits to the right of the decimal point to which the output should be rounded; for %g and %G it specifies the total number of significant figures (before and after the decimal, not including leading or trailing zeroes) to round to. For the string type, it limits the number of characters that are output, after which the string is truncated. The precision field may be omitted, given as an integer, or given as a dynamic value passed as an additional argument when indicated by an asterisk (*). For example, printf("%.*s", 3, "abcdef"); outputs abc. The length field can be omitted or be one of the standard length modifiers, such as hh (a char-sized integer), h (short), l (long), ll (long long), L (long double), z (size_t), j (intmax_t), or t (ptrdiff_t). For floating-point conversions the l modifier has no effect, because float arguments are always promoted to double when used in a varargs call. Platform-specific length options existed prior to widespread use of the ISO C99 extensions; for example, Microsoft's C runtime accepts I32 and I64 length prefixes. ISO C99 includes the inttypes.h header file, which provides a number of macros for cross-platform printf coding. For example, printf("%" PRId64, t); specifies decimal format for a 64-bit signed integer. Since each macro evaluates to a string literal, and the compiler concatenates adjacent string literals, the expression "%" PRId64 compiles to a single string. The macros include PRId32, PRIu32, PRId64, and PRIu64, among others. The type field can be any of the conversion characters, including d or i (signed decimal integer), u (unsigned decimal integer), o (octal), x and X (hexadecimal), f and F (decimal floating point), e and E (scientific notation), g and G (the shorter of the e/E and f/F forms), a and A (hexadecimal floating point), c (character), s (string), p (pointer), n (store the number of characters output so far into the int pointed to by the corresponding argument), and % (a literal percent sign). A common way to handle formatting with a custom data type is to format the custom data type value into a string, then use the %s specifier to include the serialized value in a larger message. Some printf-like functions allow extensions to the escape-character-based mini-language, thus allowing the programmer to use a specific formatting function for non-builtin types. One is glibc's (now deprecated) register_printf_function(). However, it is rarely used because it conflicts with static format string checking. Another is Vstr custom formatters, which allow adding multi-character format names. Some applications (like the Apache HTTP Server) include their own printf-like function, embedding extensions into it. However, these all tend to have the same problems that register_printf_function() has.
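A minimal sketch of the custom-type approach described above, assuming a hypothetical struct point type and helper function (neither is part of any standard library):

    #include <stdio.h>

    struct point { double x, y; };

    /* Serialize the custom type into a caller-supplied buffer using snprintf,
       which bounds the write to the buffer size. */
    static void point_to_string(char *buf, size_t size, struct point p)
    {
        snprintf(buf, size, "(%g, %g)", p.x, p.y);
    }

    int main(void)
    {
        struct point p = { 1.5, -2.0 };
        char buf[64];

        point_to_string(buf, sizeof buf, p);
        /* The pre-formatted value is embedded in a larger message via %s. */
        printf("The cursor is at %s.\n", buf);
        return 0;
    }

Keeping the conversion in an ordinary function leaves the format mini-language untouched, so static format checking such as GCC's -Wformat continues to apply to the enclosing printf call.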
The Linux kernel printk function supports a number of ways to display kernel structures using the generic %p specification, by appending additional format characters. For example, %pI4 prints an IPv4 address in dotted-decimal form. This allows static format string checking (of the %p portion) at the expense of full compatibility with normal printf. Vulnerabilities Extra value arguments are ignored, but if the format string has more format specifiers than value arguments passed, the behavior is undefined. For some C implementations, an extra format specifier results in a value being consumed even though none was passed, which enables a format string attack. Generally, for C, arguments are passed on the stack. If too few arguments are passed, then printf can read past the end of the stack frame, thus allowing an attacker to read the stack. Some compilers, like the GNU Compiler Collection, will statically check the format strings of printf-like functions and warn about problems (when using the flags -Wall or -Wformat). GCC will also warn about user-defined printf-style functions if the non-standard "format" __attribute__ is applied to the function. The format string is often a string literal, which allows static analysis of the function call. However, the format string can be the value of a variable, which allows for dynamic formatting but also a security vulnerability known as an uncontrolled format string exploit. Although an output function on the surface, printf allows writing to a memory location specified by an argument via %n. This capability is occasionally used as a part of more elaborate format-string attacks. The %n feature also makes printf accidentally Turing-complete even with a well-formed set of arguments. A game of tic-tac-toe written in the format string was a winner of the 27th IOCCC. Related functions Variants of printf in the C standard library include: fprintf outputs to a file instead of standard output. sprintf writes to a string buffer instead of standard output. snprintf provides a level of safety over sprintf since the caller provides a length n that is the length of the output buffer in bytes (including space for the trailing null character). asprintf provides safety by accepting a string handle (char**) argument. The function allocates a buffer of sufficient size to contain the formatted text and outputs the buffer via the handle. For each function of the family, including printf, there is also a variant that accepts a single va_list argument rather than a variable list of arguments. Typically, these variants start with "v". For example: vprintf, vfprintf, vsprintf. Generally, printf-like functions return the number of characters output, or a negative value to indicate failure. The following list includes notable programming languages that provide (directly or via a standard library) functionality that is the same as or similar to the C printf-like functions. Excluded are languages that use format strings that deviate from the style in this article (such as AMPL and Elixir), languages that inherit their implementation from the Java virtual machine (JVM) or other environment (such as Clojure and Scala), and languages that do not have a standard native printf implementation but have external libraries which emulate printf behavior (such as JavaScript). See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Com%C3%A9die-ballet] | [TOKENS: 770]
Contents Comédie-ballet Comédie-ballet is a genre of French drama which mixes a spoken play with interludes containing music and dance. History The first example of the genre is considered to be Les fâcheux, with words by Molière, performed in honour of Louis XIV at Vaux-le-Vicomte, the residence of Nicolas Fouquet, in 1661. The music and choreography were by Pierre Beauchamp, but Jean-Baptiste Lully later contributed a sung courante for Act I, scene 3. Molière, Lully and Beauchamp collaborated on several more examples of comédie-ballet, culminating in the masterpiece of the genre, Le Bourgeois gentilhomme, in 1670, and the scenically spectacular Psyché of January 1671, a tragicomédie et ballet which went well beyond the earlier examples of the genre. After quarrelling with Lully, Molière retained the services of Beauchamp as choreographer. His one-act prose comedy La Comtesse d'Escarbagnas premiered in December 1671 at the Château de Saint-Germain-en-Laye as part of a larger entertainment referred to as the "Ballet des Ballets". The play recycled musical episodes from several of Molière's earlier comédies-ballets, including La pastorale comique, George Dandin, Le Bourgeois gentilhomme, and Psyché. It "has sometimes been characterized as little more than a platform for songs and dances." Molière turned to the composer Marc-Antoine Charpentier for the music for Le Malade imaginaire in 1673. While performing in Le malade, Molière was taken ill on stage and died shortly afterwards. In the 18th century, the comédie-ballet became almost completely outmoded but it still exercised a long-lasting influence on the use of music in French theatre. A late example of a genuine comédie-ballet is La princesse de Navarre by Voltaire, which was performed at Versailles on 23 February 1745. It consisted of a prologue and three acts, with the addition of an overture and three musical divertissements, one per act, composed by Jean-Philippe Rameau. The vocal music is particularly difficult to sing and includes a virtuoso duet for hautes-contre. Comédie-ballet and comédie lyrique Even though scholars tend to limit the use of the term comédie-ballet to the form described above, in the 18th century some authors also applied it to other kinds of stage work, particularly a type of comic opera, usually in three or four acts, without spoken dialogue. This differed from opéra-ballet (another genre mixing opera and dance) in that it contained a continuous plot (rather than a different plot for each act) as well as frequently having comic or satirical elements. It was essentially the same as the comédie lyrique. Examples include Le carnaval et la folie (1703) by André Cardinal Destouches and La vénitienne (1768) by Antoine Dauvergne, a late reworking of the 1705 ballet of the same name by Michel de la Barre. A completely different use of the term comédie-lyrique as a sort of modern revival of the comédie-ballet is Le piège de Méduse (1913) by Erik Satie, which is a play in one act with seven short dances originally composed for the piano. List of comédies-ballets References Notes Sources Portions of this article are a translation of the equivalent page on the Italian Wikipedia
========================================
[SOURCE: https://en.wikipedia.org/wiki/Interpersonal_relationships] | [TOKENS: 5067]
Contents Interpersonal relationship In social psychology, an interpersonal relation (or interpersonal relationship) describes a social association, connection, or affiliation between two or more people. It overlaps significantly with the concept of social relations, which are the fundamental unit of analysis within the social sciences. Relations vary in degrees of intimacy, self-disclosure, duration, reciprocity, and power distribution. The main themes or trends of the interpersonal relations are: family, kinship, friendship, love, marriage, business, employment, clubs, neighborhoods, ethical values, support, and solidarity. Interpersonal relations may be regulated by law, custom, or mutual agreement, and form the basis of social groups and societies. They appear when people communicate or act with each other within specific social contexts, and they thrive on equitable and reciprocal compromises. Interdisciplinary analysis of relationships draws heavily upon the other social sciences, including, but not limited to: anthropology, communication, cultural studies, economics, linguistics, mathematics, political science, social work, and sociology. This scientific analysis had evolved during the 1990s and has become "relationship science", through the research done by Ellen Berscheid and Elaine Hatfield. This interdisciplinary science attempts to provide evidence-based conclusions through the use of data analysis. Types Friendships are non-sexual, non-romantic interpersonal relationships. Social relations can be homosocial such as female bonding and male bonding or heterosocial such as cross-sex friendship. In some countries a reduction of friendships is observed, also called friendship recession or loneliness epidemic. An intimate relationship is an interpersonal relationship that involves emotional or physical closeness between people and may include sexual intimacy and feelings of romance or love. Intimate relationships are interdependent, and the members of the relationship mutually influence each other. The quality and nature of the relationship depends on the interactions between individuals, and is derived from the unique context and history that builds between people over time. Romantic relationships have been defined in countless ways, by writers, philosophers, religions, scientists, and in the modern day, relationship counselors. Theories of love include Sternberg's triangular theory of love and biology of romantic love. Sternberg defines love in terms of intimacy, passion, and commitment, which he claims exist in varying levels in different romantic relationships. Romantic relationships can have great social and cultural variability, and may for example exist without any sexual intimacy, between people of any gender, or among a group of people, as in polyamory or open relationships. While many individuals recognize the single defining quality of a romantic relationship as the presence of love, it is impossible for romantic relationships to survive without the component of interpersonal communication. Within romantic relationships, love is therefore equally difficult to define. Hazan and Shaver define love, using Ainsworth's attachment theory, as comprising proximity, emotional support, self-exploration, and separation distress when parted from the loved one. Other components commonly associated with love are physical attraction, similarity, reciprocity, and self-disclosure. Early adolescent relationships are characterized by companionship, reciprocity, and sexual experiences. 
As emerging adults mature, they begin to develop attachment and caring qualities in their relationships, including love, bonding, security, and support for partners. Earlier relationships also tend to be shorter and exhibit greater involvement with social networks. Later relationships are often marked by shrinking social networks, as the couple dedicates more time to each other than to associates. Later relationships also tend to exhibit higher levels of commitment. Most psychologists and relationship counselors predict a decline of intimacy and passion over time, replaced by a greater emphasis on companionate love (differing from adolescent companionate love in the caring, committed, and partner-focused qualities). However, couple studies have found no decline in intimacy nor in the importance of sex, intimacy, and passionate love to those in longer or later-life relationships. Older people tend to be more satisfied in their relationships, but face greater barriers to entering new relationships than do younger or middle-aged people. Older women in particular face social, demographic, and personal barriers; men aged 65 and older are nearly twice as likely as women to be married, and widowers are nearly three times as likely to be dating 18 months following their partner's loss compared to widows. The term significant other gained popularity during the 1990s, reflecting the growing acceptance of 'non-heteronormative' relationships. It can be used to avoid making an assumption about the gender or relational status (e.g. married, cohabitating, civil union) of a person's intimate partner. Cohabiting relationships continue to rise, with many partners considering cohabitation to be nearly as serious as, or a substitute for, marriage. In particular, LGBTQ+ people often face unique challenges in establishing and maintaining intimate relationships. The strain of internalized discrimination, socially ingrained or homophobia, transphobia and other forms of discrimination against LGBTQ+ people, and social pressure of presenting themselves in line with socially acceptable gender norms can affect their health, quality of life, satisfaction, emotions etc. inside and outside their relationships. LGBTQ+ youth also lack the social support and peer connections enjoyed by hetero-normative young people. Nonetheless, comparative studies of homosexual and heterosexual couples have found few differences in relationship intensity, quality, satisfaction, or commitment. Although nontraditional relationships continue to rise, marriage still makes up the majority of relationships except among emerging adults. It is also still considered by many to occupy a place of greater importance among family and social structures. In ancient times, parent–child relationships were often marked by fear, either of rebellion or abandonment, resulting in the strict filial roles in, for example, ancient Rome and China. Freud conceived of the Oedipal complex, the supposed obsession that young boys have towards their mothers and the accompanying fear and rivalry with their fathers, and the Electra complex, in which the young girl feels that her mother has castrated her and therefore becomes obsessed with her father. Freud's ideas influenced thought on parent–child relationships for decades. Another early conception of parent–child relationships was that love only existed as a biological drive for survival and comfort on the child's part. 
In 1958, however, Harry Harlow's study "The Hot Wire Mother", comparing rhesus monkeys' reactions to wire surrogate "mothers" and cloth "mothers", demonstrated that the infant monkeys sought comfort and affection from a caregiver, preferring the cloth surrogates to the wire ones, and not merely the food a caregiver provided. The study laid the groundwork for Mary Ainsworth's attachment theory, showing how the infants used their cloth "mothers" as a secure base from which to explore. In a series of studies using the strange situation, a scenario in which an infant is separated from then reunited with the parent, Ainsworth defined three styles of parent-child relationship. Secure attachments are linked to better social and academic outcomes and greater moral internalization, as research proposes the idea that parent-child relationships play a key role in the developing morality of young children. Secure attachments are also linked to less delinquency for children, and have been found to predict later relationship success. For most of the late nineteenth through the twentieth century, the perception of adolescent-parent relationships was that of a time of upheaval. G. Stanley Hall popularized the "Sturm und drang", or storm and stress, model of adolescence. Psychological research has painted a much tamer picture. Although adolescents are more risk-seeking and emerging adults have higher suicide rates, they are largely less volatile and have much better relationships with their parents than the storm and stress model would suggest. Early adolescence often marks a decline in parent-child relationship quality, which then re-stabilizes through adolescence, and relationships are sometimes better in late adolescence than prior to its onset. With the increasing average age at marriage and more youths attending college and living with parents past their teens, the concept of a new period called emerging adulthood gained popularity. This is considered a period of uncertainty and experimentation between adolescence and adulthood. During this stage, interpersonal relationships are considered to be more self-focused, and relationships with parents may still be influential. Sibling relationships have a profound effect on social, psychological, emotional, and academic outcomes. Although proximity and contact usually decrease over time, sibling bonds continue to have an effect throughout life. Sibling bonds are one of the few enduring relationships humans may experience. Sibling relationships are affected by parent-child relationships, such that sibling relationships in childhood often reflect the positive or negative aspects of children's relationships with their parents. Business relationships are generally held[by whom?] to be distinct from personal relations, a contrasting mode that, apart from departures from the norm, is based on impersonal interest and rational rather than emotional concerns.[citation needed] Stages Interpersonal relationships are dynamic systems that change continuously during their existence, and have a beginning, a lifespan, and an end. They tend to grow and improve gradually, as people become acquainted and become closer emotionally, or they gradually deteriorate as people drift apart, move on with their lives and form new relationships with others. One of the most influential[according to whom?] models of relationship development was proposed by psychologist George Levinger.
This model was formulated to describe heterosexual, adult romantic relationships, but it has been applied to other kinds of interpersonal relations as well.[citation needed] According to the model, the natural development of a relationship follows five stages: Proximity: Proximity increases the chance of repeated exposure to the same person. Long-term exposure that can develop familiarity is more likely to trigger like or hate. Technological advance: The Internet removes the problem of lack of communication due to long distance. People can communicate with others who live far away from them through video calls or text. Internet is a medium for people to be close to others who are not physically near them. Similarity: People prefer to make friends with others who are similar to them because their thoughts and feelings are more likely to be understood. According to the latest Systematic Review of the Economic Literature on the Factors associated with Life Satisfaction (dating from 2007), stable and secure relationships are beneficial, and correspondingly, relationship dissolution is harmful. The American Psychological Association has summarized the evidence on breakups. Breaking up can actually be a positive experience when the relationship did not expand the self and when the breakup leads to personal growth. They also recommend some ways to cope with the experience: Less time between a breakup and a subsequent relationship predicts higher self-esteem, attachment security, emotional stability, respect for your new partner, and greater well-being. Furthermore, rebound relationships do not last any shorter than regular relationships. 60% of people are friends with one or more ex. 60% of people have had an off-and-on relationship. 37% of cohabiting couples, and 23% of the married, have broken up and gotten back together with their existing partner. Terminating a marital relationship implies divorce or annulment. One reason cited for divorce is infidelity. The determinants of unfaithfulness are debated by dating service providers, feminists, academics, and science communicators. According to Psychology Today, women's, rather than men's, level of commitment more strongly determines if a relationship will continue. Pathological relationships Abusive relationships involve either maltreatment or violence such as physical abuse, physical neglect, sexual abuse, and emotional maltreatment. Abusive relationships within the family are very prevalent in the United States and usually involve women or children as victims. Common individual factors for abusers include low self-esteem, poor impulse control, external locus of control, drug use, alcohol abuse, and negative affectivity. There are also external factors such as stress, poverty, and loss which contribute to likelihood of abuse. Codependency initially focused on a codependent partner enabling substance abuse, but it has become more broadly defined to describe a dysfunctional relationship with one or both partners having extreme dependence on or preoccupation with the relationship. For example, a narcissist and a sycophant with abandonment issues may create codependency as the narcissist values the flattery from the sycophant and the sycophant values the narcissist's approval. There are some who even refer to codependency as an addiction to the relationship. The focus of codependents tends to be on the emotional state, behavioral choices, thoughts, and beliefs of another person. 
Often those who are codependent neglect themselves in favor of taking care of others and have difficulty fully developing an identity of their own. Narcissists focus on themselves and often distance themselves from intimate relationships; the focus of narcissistic interpersonal relationships is to promote one's self-concept. Generally, narcissists show less empathy in relationships and view love pragmatically or as a game involving others' emotions. Narcissists are usually part of the personality disorder, narcissistic personality disorder (NPD). In relationships, they tend to affect the other person as they attempt to use them to enhance their self-esteem. Specific types of NPD make a person incapable of having an interpersonal relationship due to their being cunning, envious, and contemptuous. Importance Human beings are innately social and are shaped by their experiences with others. There are multiple perspectives to understand this inherent motivation to interact with others. According to Maslow's hierarchy of needs, humans need to feel love (sexual/nonsexual) and acceptance from social groups (family, peer groups). In fact, the need to belong is so innately ingrained that it may be strong enough to overcome physiological and safety needs, such as children's attachment to abusive parents or staying in abusive romantic relationships. Such examples illustrate the extent to which the psychobiological drive to belong is entrenched. Another way to appreciate the importance of relationships is in terms of a reward framework. This perspective suggests that individuals engage in relations that are rewarding in both tangible and intangible ways. The concept fits into a larger theory of social exchange. This theory is based on the idea that relationships develop as a result of cost–benefit analysis. Individuals seek out rewards in interactions with others and are willing to pay a cost for said rewards. In the best-case scenario, rewards will exceed costs, producing a net gain. This can lead to "shopping around" or constantly comparing alternatives to maximize the benefits or rewards while minimizing costs. Relationships are also important for their ability to help individuals develop a sense of self. The relational self is the part of an individual's self-concept that consists of the feelings and beliefs that one has regarding oneself that develops based on interactions with others. In other words, one's emotions and behaviors are shaped by prior relationships. Relational self theory posits that prior and existing relationships influence one's emotions and behaviors in interactions with new individuals, particularly those individuals that remind them of others in their life. Studies have shown that exposure to someone who resembles a significant other activates specific self-beliefs, changing how one thinks about oneself in the moment more so than exposure to someone who does not resemble one's significant other. Power and dominance Power is the ability to influence the behavior of other people. When two parties have or assert unequal levels of power, one is termed "dominant" and the other "submissive". Expressions of dominance can communicate an intention to assert or maintain dominance in a relationship. Being submissive can be beneficial because it saves time, limits emotional stress, and may avoid hostile actions such as withholding of resources, cessation of cooperation, termination of the relationship, maintaining a grudge, or even physical violence. 
Submission occurs in different degrees; for example, some employees may follow orders without question, whereas others might express disagreement but concede when pressed. Groups of people can form a dominance hierarchy. For example, a hierarchical organization uses a command hierarchy for top-down management. This can reduce time wasted in conflict over unimportant decisions, prevents inconsistent decisions from harming the operations of the organization, maintain alignment of a large population of workers with the goals of the owners (which the workers might not personally share) and, if promotion is based on merit, help ensure that the people with the best expertise make important decisions. This contrasts with group decision-making and systems which encourage decision-making and self-organization by front-line employees, who in some cases may have better information about customer needs or how to work efficiently. Dominance is only one aspect of organizational structure. A power structure describes power and dominance relationships in a larger society. For example, a feudal society under a monarchy exhibits a strong dominance hierarchy in both economics and physical power, whereas dominance relationships in a society with democracy and capitalism are more complicated. In business relationships, dominance is often associated with economic power. For example, a business may adopt a submissive attitude to customer preferences (stocking what customers want to buy) and complaints ("the customer is always right") in order to earn more money. A firm with monopoly power may be less responsive to customer complaints because it can afford to adopt a dominant position. In a business partnership a "silent partner" is one who adopts a submissive position in all aspects, but retains financial ownership and a share of the profits. Two parties can be dominant in different areas. For example, in a friendship or romantic relationship, one person may have strong opinions about where to eat dinner, whereas the other has strong opinions about how to decorate a shared space. It could be beneficial for the party with weak preferences to be submissive in that area because it will not make them unhappy and avoids conflict with the party that would be unhappy. The breadwinner model is associated with gender role assignments where the male in a heterosexual marriage would be dominant as they are responsible for economic provision. Relationship quality Relationship quality refers to the perceived quality of a close relationship (i.e., romantic relationship, friendship, or family). Relationship quality (sometimes used interchangeably with relationship satisfaction, relationship flourishing, or relationship happiness), in the context of close interpersonal relationships is generally defined as a reflection of a couple's overall feelings towards their relationship. More simply, it is the extent to which members in a relationship (romantic or otherwise) view their relationship as positive or negative. The determinant of relationship quality is often a variety of self-reported evaluations of traits that make up relationship quality. For instance, feelings of closeness may be measured via questions that ask an individual to rate the extent to which they identify with statements. I.e., "I feel close to my partner", "I am comfortable sharing personal thoughts and feelings with my partner", etc. 
These questions are typically asked on a Likert scale, and the average of those scores represents an individual's feelings of closeness toward their partner. Some scales are considered unidimensional and attempt to directly measure the construct of relationship quality. Other scales, considered multidimensional, repeat this process for other hypothesized components (e.g., closeness and satisfaction) before aggregating dimensions into a representative "relationship quality" score. Historically, relationship quality has been most commonly studied in the context of intimate romantic relationships. More recently, the study of relationship quality has extended to include other types of close relationships (see: friendships, family, sibling, parent). There is not always agreement among scholars about what domains should be included in the measurement of relationship quality, even within the different types of close relationships. Despite this, relationship quality and its predictors have been of popular interest to relationship scholars due to the range of psychological and relational outcomes with which high-quality relationships have been positively associated. Relationship satisfaction Social exchange theory and Rusbult's investment model show that relationship satisfaction is based on three factors: rewards, costs, and comparison levels (Miller, 2012). Rewards refer to any aspects of the partner or relationship that are positive. Conversely, costs are the negative or unpleasant aspects of the partner or their relationship. The comparison level includes what each partner expects of the relationship. The comparison level is influenced by past relationships, and by general relationship expectations they are taught by family and friends. Individuals in long-distance relationships (LDRs) rated their relationships as more satisfying than individuals in proximal relationships (PRs). Alternatively, Holt and Stone (1988) found that long-distance couples who were able to meet with their partner at least once a month had similar satisfaction levels to unmarried couples who cohabitated. Also, relationship satisfaction was lower for members of LDRs who saw their partner less frequently than once a month. LDR couples reported the same level of relationship satisfaction as couples in PRs, despite only seeing each other on average once every 23 days. Social exchange theory and the investment model both theorize that relationships that are high in cost would be less satisfying than relationships that are low in cost. LDRs have a higher level of costs than PRs; therefore, one would assume that LDRs are less satisfying than PRs. Yet individuals in LDRs are more satisfied with their relationships compared to individuals in PRs. This can be explained by unique aspects of the LDRs, how the individuals use relationship maintenance behaviors, and the attachment styles of the individuals in the relationships. Therefore, the costs and benefits of the relationship are evaluated subjectively by the individual, and people in LDRs tend to report lower costs and higher rewards in their relationship compared to PRs. Confucianism is a study and theory of relationships, especially within hierarchies. Social harmony—the central goal of Confucianism—results in part from every individual knowing their place in the social order and playing their part well. Particular duties arise from each person's particular situation in relation to others.
The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders; and as a senior in relation to younger siblings, students, and others. Juniors are considered in Confucianism to owe their seniors reverence and seniors have duties of benevolence and concern toward juniors. A focus on mutuality is prevalent in East Asian cultures to this day. The mindfulness theory of relationships shows how closeness in relationships may be enhanced. Minding is the "reciprocal knowing process involving the nonstop, interrelated thoughts, feelings, and behaviors of persons in a relationship." Five components of "minding" include: In popular culture Popular perceptions of intimate relationships are strongly influenced by movies and television. Common messages are that love is predestined, love at first sight is possible, and that love with the right person always succeeds. Those who consume the most romance-related media tend to believe in predestined romance and that those who are destined to be together implicitly understand each other. These beliefs, however, can lead to less communication and problem-solving as well as giving up on relationships more easily when conflict is encountered. Social media has changed the face of interpersonal relationships. Romantic interpersonal relationships are no less impacted. For example, in the United States, Facebook has become an integral part of the dating process for emerging adults. Social media can have both positive and negative impacts on romantic relationships. For example, supportive social networks have been linked to more stable relationships. However, social media usage can also facilitate conflict, jealousy, and passive-aggressive behaviors such as spying on a partner. Aside from direct effects on the development, maintenance, and perception of romantic relationships, excessive social network usage is linked to jealousy and dissatisfaction in relationships. Another common behavior in online communities, including dating, is lurking. Lurking means watching communities without posting or replying. Some say it means never posting, while others say even small posts count (Neelen & Fetter, 2010; Golder & Donath, 2004). Lurking is often seen negatively, tied to "freeloading" (Preece, Nonnecke, & Andrews, 2004), but others see it as valid participation. Lurking depends on personal goals, personality, and group dynamics. For example, introverts tend to lurk more, and people with higher tech confidence participate more (Ross et al., 2009; Sun, Rau, & Ma, 2014). Studies show up to 90% of users may lurk at some point (Muller, 2012). Lurking can help gather info or support without interacting, but it can make a community seem less active, Lurking can make online relationships feel one-sided because it reduces interaction. When someone just watches without joining in, it can create distance and make it harder to build trust. This can slow down the growth of a real connection. Understanding lurking can help encourage more participation (Yeow, Johnson, & Faraj, 2006). A growing segment of the population is engaging in purely online dating, sometimes but not always moving towards traditional face-to-face interactions. These online relationships differ from face-to-face relationships; for example, self-disclosure may be of primary importance in developing an online relationship. Conflict management differs, since avoidance is easier and conflict resolution skills may not develop in the same way. 
Additionally, the definition of infidelity is both broadened and narrowed, since physical infidelity becomes easier to conceal but emotional infidelity (e.g. chatting with more than one online partner) becomes a more serious offense. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Global_digital_divide] | [TOKENS: 2880]
Contents Global digital divide The global digital divide describes global disparities, primarily between developed and developing countries, in regards to access to computing and information resources such as the Internet and the opportunities derived from such access. The Internet is expanding very quickly, and not all countries—especially developing countries—can keep up with the constant changes.[example needed] The term "digital divide" does not necessarily mean that someone does not have technology; it could mean that there is simply a difference in technology. These differences can refer to, for example, high-quality computers, fast Internet, technical assistance, or telephone services. Statistics There is a large inequality worldwide in terms of the distribution of installed telecommunication bandwidth. In 2014 only three countries (China, US, Japan) host 50% of the globally installed bandwidth potential (see pie-chart Figure on the right). This concentration is not new, as historically only ten countries have hosted 70–75% of the global telecommunication capacity (see Figure). The U.S. lost its global leadership in terms of installed bandwidth in 2011, being replaced by China, which hosts more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total). Versus the digital divide The global digital divide is a special case of the digital divide; the focus is set on the fact that "Internet has developed unevenly throughout the world": 681 causing some countries to fall behind in technology, education, labor, democracy, and tourism. The concept of the digital divide was originally popularized regarding the disparity in Internet access between rural and urban areas of the United States of America; the global digital divide mirrors this disparity on an international scale. The global digital divide also contributes to the inequality of access to goods and services available through technology. Computers and the Internet provide users with improved education, which can lead to higher wages; the people living in nations with limited access are therefore disadvantaged. This global divide is often characterized as falling along what is sometimes called the North–South divide of "northern" wealthier nations and "southern" poorer ones. Obstacles to a solution Some people argue that necessities need to be considered before achieving digital inclusion, such as an ample food supply and quality health care. Minimizing the global digital divide requires considering and addressing the following types of access: Involves "the distribution of ICT devices per capita…and land lines per thousands".: 306 Individuals need to obtain access to computers, landlines, and networks in order to access the Internet. This access barrier is also addressed in Article 21 of the convention on the Rights of Persons with Disabilities by the United Nations. The cost of ICT devices, traffic, applications, technician and educator training, software, maintenance, and infrastructures require ongoing financial means. Financial access and "the levels of household income play a significant role in widening the gap". Empirical tests have identified that several socio-demographic characteristics foster or limit ICT access and usage. Among different countries, educational levels and income are the most powerful explanatory variables, with age being a third one. 
While a Global Gender Gap in access and usage of ICT's exist, empirical evidence shows that this is due to unfavorable conditions concerning employment, education and income and not to technophobia or lower ability. In the contexts understudy, women with the prerequisites for access and usage turned out to be more active users of digital tools than men. In the US, for example, the figures for 2018 show 89% of men and 88% of women use the Internet. In order to use computer technology, a certain level of information literacy is needed. Further challenges include information overload and the ability to find and use reliable information. Computers need to be accessible to individuals with different learning and physical abilities including complying with Section 508 of the Rehabilitation Act as amended by the Workforce Investment Act of 1998 in the United States. In illustrating institutional access, Wilson states "the numbers of users are greatly affected by whether access is offered only through individual homes or whether it is offered through schools, community centers, religious institutions, cybercafés, or post offices, especially in poor countries where computer access at work or home is highly limited".: 303 Guillen & Suarez argue that "democratic political regimes enable faster growth of the Internet than authoritarian or totalitarian regimes.": 687 The Internet is considered a form of e-democracy, and attempting to control what citizens can or cannot view is in contradiction to this. Recently situations in Iran and China have denied people the ability to access certain websites and disseminate information. Iran has prohibited the use of high-speed Internet in the country and has removed many satellite dishes in order to prevent the influence of Western culture, such as music and television. Many experts claim that bridging the digital divide is not sufficient and that the images and language needed to be conveyed in a language and images that can be read across different cultural lines. A 2013 study conducted by Pew Research Center noted how participants taking the survey in Spanish were nearly twice as likely not to use the internet. Examples In the early 21st century, residents of developed countries enjoy many Internet services which are not yet widely available in developing countries, including: Proposed remedies There are four specific arguments why it is important to "bridge the gap": While these four arguments are meant to lead to a solution to the digital divide, there are a couple of other components that need to be considered. The first one is rural living versus suburban living. Rural areas used to have very minimal access to the Internet, for example. However, nowadays, power lines and satellites are used to increase the availability in these areas. Another component to keep in mind is disabilities. Some people may have the highest quality technologies, but a disability they have may keep them from using these technologies to their fullest extent. 
Using previous studies (Gamos, 2003; Nsengiyuma & Stork, 2005; Harwit, 2004 as cited in James), James asserts that in developing countries, "internet use has taken place overwhelmingly among the upper-income, educated, and urban segments" largely due to the high literacy rates of this sector of the population.: 58 As such, James suggests that part of the solution requires that developing countries first build up the literacy/language skills, computer literacy, and technical competence that low-income and rural populations need in order to make use of ICT. It has also been suggested that there is a correlation between democratic regimes and the growth of the Internet. One hypothesis by Guillen is, "The more democratic the polity, the greater the Internet use...The government can try to control the Internet by monopolizing control", and Norris et al. also contend, "If there is less government control of it, the Internet flourishes, and it is associated with greater democracy and civil liberties." From an economic perspective, Pick and Azari identify "foreign direct investment (FDI), primary education, educational investment, access to education, and government prioritization of ICT" as all-important in developing nations.: 112 Specific remedies proposed by the study include: "invest in stimulating, attracting, and growing creative technical and scientific workforce; increase the access to education and digital literacy; reduce the gender divide and empower women to participate in the ICT workforce; emphasize investing in intensive Research and Development for selected metropolitan areas and regions within nations".: 111 There are projects worldwide that have implemented, to various degrees, the remedies outlined above. Many such projects have taken the form of Information Communications Technology Centers (ICT centers). Rahnman explains that "the main role of ICT intermediaries is defined as an organization providing effective support to local communities in the use and adaptation of technology. Most commonly, an ICT intermediary will be a specialized organization from outside the community, such as a non-governmental organization, local government, or international donor. On the other hand, a social intermediary is defined as a local institution from within the community, such as a community-based organization.": 128 Another proposed remedy is the Internet's promise of efficient communications within and among developing countries, so that citizens worldwide can effectively help each other to solve their problems. Grameen Banks and Kiva loans are two microcredit systems designed to help citizens worldwide to contribute online towards entrepreneurship in developing communities. Economic opportunities range from entrepreneurs who can afford the hardware and broadband access required to maintain Internet cafés to agribusinesses having control over the seeds they plant. At the Massachusetts Institute of Technology, the IMARA organization (from the Swahili word for "power") sponsors a variety of outreach programs that bridge the Global Digital Divide. Its aim is to find and implement long-term, sustainable remedies which will increase the availability of educational technology and resources to domestic and international communities.
These projects are run under the aegis of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and staffed by MIT volunteers who provide training and install and donate computer setups in greater Boston, Massachusetts, Kenya, Indian reservations in the American Southwest such as the Navajo Nation, the Middle East, and the Fiji Islands. The CommuniTech project strives to empower underserved communities through sustainable technology and education. According to Dominik Hartmann of MIT's Media Lab, interdisciplinary approaches are needed to bridge the global digital divide. Building on the premise that any effective solution must be decentralized, allowing the local communities in developing nations to generate their own content, one scholar has posited that social media—like Facebook, YouTube, and Twitter—may be useful tools in closing the divide. As Amir Hatem Ali suggests, "the popularity and generative nature of social media empower individuals to combat some of the main obstacles to bridging the digital divide".: 188 Facebook's statistics reinforce this claim. According to Facebook, more than seventy-five percent of its users reside outside of the US. Moreover, more than seventy languages are presented on its website. The high number of international users is due to many of the qualities of Facebook and other social media. Among them are the ability to offer a means of interacting with others, user-friendly features, and the fact that most sites are available at no cost. The limitation of social media, however, is that it is accessible only where physical access already exists. Nevertheless, with its ability to encourage digital inclusion, social media can be used as a tool to bridge the global digital divide. Some cities in the world have started programs to bridge the digital divide for their residents, school children, students, parents and the elderly. One such program, founded in 1996, was sponsored by the city of Boston and called the Boston Digital Bridge Foundation. It especially concentrates on school children and their parents, helping to make both equally and similarly knowledgeable about computers, using application programs, and navigating the Internet. Free Basics is a partnership between the social networking services company Facebook and six companies (Samsung, Ericsson, MediaTek, Opera Software, Nokia and Qualcomm) that plans to bring affordable access to selected Internet services to less developed countries by increasing efficiency and facilitating the development of new business models around the provision of Internet access. In a whitepaper released by Facebook's founder and CEO Mark Zuckerberg, connectivity is asserted to be a "human right", and Internet.org was created to improve Internet access for people around the world. "Free Basics provides people with access to useful services on their mobile phones in markets where internet access may be less affordable. The websites are available for free without data charges, and include content about news, employment, health, education and local information etc. By introducing people to the benefits of the internet through these websites, we hope to bring more people online and help improve their lives." However, Free Basics has also been accused of violating net neutrality by limiting access to handpicked services. Despite wide deployment in numerous countries, it has met with heavy resistance, notably in India, where the Telecom Regulatory Authority of India eventually banned it in 2016.
Several projects to bring the Internet to the entire world with a satellite constellation have been devised in the last decade, one of these being Starlink by Elon Musk's company SpaceX. Unlike Free Basics, it would provide people with full internet access and would not be limited to a few selected services. In the same week Starlink was announced, serial entrepreneur Richard Branson announced his own project, OneWeb, a similar constellation of approximately 700 satellites that had already procured communication frequency licenses for its broadcast spectrum and could possibly be operational by 2020. The biggest hurdle to these projects is the astronomical financial and logistical cost of launching so many satellites. After the failure of previous satellite-to-consumer space ventures, satellite industry consultant Roger Rusch said "It's highly unlikely that you can make a successful business out of this." Musk has publicly acknowledged this business reality, and indicated in mid-2015 that, while endeavoring to develop this technically complicated space-based communication system, he wanted to avoid overextending the company, and stated that they were being measured in the pace of development. As of 2023, Starlink is being actively deployed, with the goal of clearing licensing hurdles in every country open to its services. One Laptop per Child (OLPC) was an attempt by an American non-profit to narrow the digital divide. This organization, founded in 2005, provided inexpensively produced "XO" laptops (dubbed the "$100 laptop", though actual production costs varied) to children residing in poor and isolated regions within developing countries. Each laptop belonged to an individual child and provided a gateway to digital learning and Internet access. The XO laptops were designed to withstand more abuse than higher-end machines, and they contained features suited to the unique conditions of remote villages. Each laptop was constructed to use as little power as possible, had a sunlight-readable screen, and was capable of automatically networking with other XO laptops in order to access the Internet—as many as 500 machines could share a single point of access. The project became defunct in 2014. World Summit on the Information Society Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. See also References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-34] | [TOKENS: 5247]
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis constitute an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. 
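The triad-level reasoning described above can be made concrete with a small computation. The following is a minimal sketch in Python (the actors and tie signs are invented for illustration) applying Heider's balance rule, under which a signed triad is balanced when the product of its three tie signs is positive:

from itertools import combinations

# Hypothetical signed ties among four actors: +1 = friendly, -1 = hostile.
ties = {
    ("ana", "ben"): +1,
    ("ana", "carla"): +1,
    ("ben", "carla"): -1,   # rivalrous "love triangle" style discord
    ("ana", "dan"): -1,
    ("ben", "dan"): -1,
    ("carla", "dan"): +1,
}

def sign(a, b):
    """Look up the sign of the tie between two actors, in either order."""
    return ties.get((a, b)) or ties.get((b, a))

actors = {name for pair in ties for name in pair}
for triad in combinations(sorted(actors), 3):
    signs = [sign(a, b) for a, b in combinations(triad, 2)]
    if None in signs:
        continue  # skip triads with a missing tie
    product = signs[0] * signs[1] * signs[2]
    state = "balanced" if product > 0 else "unbalanced"
    print(triad, state)

In this toy data the ana-ben-carla triad comes out unbalanced, matching the love-triangle example in the text, and would be expected to resolve by one of its relations changing sign.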
This framework can represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási–Albert model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for use in social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. 
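As a rough illustration of the scale-free properties just described, the following sketch grows a Barabási–Albert preferential-attachment graph and inspects its hubs and clustering coefficient. It assumes the third-party networkx library is available, and the parameters are arbitrary:

import networkx as nx

# Grow a preferential-attachment graph: 1,000 nodes, each new node
# attaching to 2 existing nodes (parameters chosen only for illustration).
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = [d for _, d in G.degree()]
avg_degree = sum(degrees) / len(degrees)
hubs = sorted(G.degree(), key=lambda nd: nd[1], reverse=True)[:5]

print("average degree:", round(avg_degree, 2))
print("five highest-degree nodes (hubs):", hubs)  # degrees far above the average
print("average clustering coefficient:", round(nx.average_clustering(G), 4))

The hub degrees printed here greatly exceed the average degree, which is the hallmark of a heavy-tailed degree distribution.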
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibition. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. 
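The structural-hole and weak-tie arguments discussed above are commonly operationalized with Burt's constraint and effective-size measures. The sketch below, again assuming networkx and using an invented toy graph, identifies a broker whose ties bridge two otherwise separate clusters:

import networkx as nx

# Two dense clusters joined only through a single broker node "g".
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
    ("d", "e"), ("e", "f"), ("d", "f"),   # cluster 2
    ("c", "g"), ("g", "d"),               # broker g spans the structural hole
])

constraint = nx.constraint(G)          # lower constraint -> more brokerage opportunity
effective_size = nx.effective_size(G)  # number of non-redundant contacts per node

for node in sorted(G, key=constraint.get):
    print(node, "constraint=%.3f" % constraint[node],
          "effective size=%.2f" % effective_size[node])

In this toy graph the broker has the lowest constraint of any node, reflecting its access to non-redundant information from both clusters.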
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or one culture and another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages, Indian slums, or in the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users.) For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. 
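The diffusion studies mentioned above are often explored with simple contagion simulations. The following is an illustrative sketch, not any specific published model, of an independent-cascade process in which each newly adopting node gets one chance to convert each neighbour with a fixed probability (networkx assumed; parameters arbitrary):

import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=None):
    """Simulate one run of an independent-cascade diffusion starting from `seeds`."""
    rng = rng or random.Random()
    adopted = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for node in frontier:
            for neighbor in G.neighbors(node):
                if neighbor not in adopted and rng.random() < p:
                    adopted.add(neighbor)
                    new_frontier.append(neighbor)
        frontier = new_frontier
    return adopted

# Toy example on a small-world graph; all parameters are arbitrary.
G = nx.watts_strogatz_graph(n=200, k=6, p=0.05, seed=1)
adopters = independent_cascade(G, seeds=[0], p=0.15, rng=random.Random(1))
print("fraction of the network reached:", len(adopters) / G.number_of_nodes())

Varying the adoption probability or the position of the seed nodes shows how network structure can facilitate or impede the spread of an innovation.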
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker, to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research studies of formal or informal organization relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
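Centrality measures of the kind mentioned in the team-network literature above are straightforward to compute. A brief sketch, assuming networkx and an invented edge list, contrasts degree centrality with betweenness centrality, the latter highlighting actors who sit on many shortest paths between others:

import networkx as nx

# Invented intra-organizational communication ties.
G = nx.Graph([
    ("lee", "kim"), ("lee", "ana"), ("kim", "ana"),   # one work group
    ("raj", "mia"), ("raj", "tom"), ("mia", "tom"),   # another work group
    ("ana", "raj"),                                   # the only between-group tie
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in sorted(G, key=betweenness.get, reverse=True):
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")

Here the two actors carrying the sole between-group tie score highest on betweenness even though their degree is unremarkable.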
In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill, writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. 
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Under the pattern of homophily, ties are most likely to form between nodes that are most similar to each other; under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area. 
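Homophily and segregation of the kind just described are commonly quantified with assortativity measures. The sketch below, assuming networkx and a made-up attributed graph, computes attribute assortativity, which is positive when ties disproportionately connect nodes sharing the same group label:

import networkx as nx

G = nx.Graph()
# Hypothetical friendship ties annotated with a group attribute.
members = {"a": "blue", "b": "blue", "c": "blue", "d": "red", "e": "red", "f": "red"}
for node, group in members.items():
    G.add_node(node, group=group)
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # within-group ties (blue)
    ("d", "e"), ("e", "f"), ("d", "f"),   # within-group ties (red)
    ("c", "d"),                           # a single cross-group tie
])

r = nx.attribute_assortativity_coefficient(G, "group")
print("attribute assortativity:", round(r, 3))  # positive -> ties concentrate within groups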
========================================
[SOURCE: https://en.wikipedia.org/wiki/Facebook_Messenger] | [TOKENS: 2163]
Contents Facebook Messenger Messenger (formerly known as Facebook Messenger) is an American proprietary instant messaging service developed by Meta Platforms, the company that operates Facebook. Originally developed as Facebook Chat in 2008, the client application of Messenger is currently available on iOS and Android mobile platforms, Windows and macOS desktop platforms, through the Messenger.com web application, and on the standalone Facebook Portal hardware. Messenger is used to send messages and exchange photos, videos, stickers, audio, and files, and also to react to other users' messages and interact with bots. The service also supports voice and video calling. The standalone apps support using multiple accounts, conversations with end-to-end encryption, and playing games. With a monthly user base of over 1 billion people, it is among the largest social media platforms. History Following tests of a new instant messaging platform on Facebook in March 2008, the feature, then titled "Facebook Chat", was gradually released to users in April 2008. Facebook revamped its messaging platform in November 2010, and subsequently acquired the group messaging service Beluga in March 2011, which the company used to launch its standalone iOS and Android mobile apps on August 9, 2011. Facebook later launched a BlackBerry version in October 2011. An app for Windows Phone, though lacking features including voice messaging and chat heads, was released in March 2014. In April 2014, Facebook announced that the messaging feature would be removed from the main Facebook app and that users would be required to download the separate Messenger app. An iPad-optimized version of the iOS app was released in July 2014. In April 2015, Facebook launched a website interface for Messenger. A Tizen app was released on July 13, 2015. Facebook launched Messenger for Windows 10 in April 2016. In October 2016, Facebook released Messenger Lite, a stripped-down version of Messenger with a reduced feature set. The app is aimed primarily at older Android phones and regions where high-speed Internet is not widely available. In April 2017, Messenger Lite was expanded to 132 more countries. In May 2017, Facebook revamped the design for Messenger on Android and iOS, bringing a new home screen with tabs and categorization of content and interactive media, red dots indicating new activity, and relocated sections. Facebook announced a Messenger program for Windows 7 in a limited beta test in November 2011. The following month, the Israeli blog TechIT leaked a download link for the program, with Facebook subsequently confirming and officially releasing the program. The program was eventually discontinued in March 2014. A Firefox web browser add-on was released in December 2012, but was also discontinued in March 2014. In December 2017, Facebook announced Messenger Kids, a new app aimed at persons under 13 years of age. The app comes with some differences compared to the standard version. In 2019, Messenger was announced to be the second most downloaded mobile app of the decade, from 2011 to 2019. In December 2019, Messenger dropped support for users to sign in using only a mobile number, meaning that users must sign in to a Facebook account in order to use the service. In March 2020, Facebook started to ship its dedicated Messenger for macOS app through the Mac App Store. The app is currently live in regions including France, Australia, Mexico, Poland, and many others. 
In April 2020, Facebook began rolling out a new feature called Messenger Rooms, a video chat feature that allows users to chat with up to 50 people at a time. The feature rivals Zoom, an application that gained a lot of popularity during the COVID-19 pandemic. Privacy concerns arose since the feature uses the same data collection policies as mainstream Facebook. In July 2020, Facebook added a new feature in Messenger that lets iOS users use Apple's Face ID or Touch ID to lock their chats. The feature is called App Lock and is part of several changes in Messenger regarding privacy and security. The option to view only "Unread Threads" was removed from the inbox, requiring the account holder to scroll through the entire inbox to be certain every unread message has been seen. On October 13, 2020, the Messenger application introduced cross-app messaging with Instagram, which was launched in September 2021. In addition to the integrated messaging, the application introduced a new logo, an amalgamation of the Messenger and Instagram logos. The desktop app of Messenger was shut down on December 15, 2025. Messaging services were moved to the Facebook website, or to Messenger's site for those without an account on the former. The Messenger site will be discontinued on April 15, 2026. Messaging services will be moved to the Facebook website after that date. Features Messenger's features vary in their geographical coverage and in the devices on which they are available. In addition there is a vanishing message feature, as well as an audio recording feature which allows audio recordings of up to one minute and which may or may not be vanishing. In April 2016, Facebook announced a bot platform for Messenger, including an API to build chat bots to interact with users. News publisher bots "message subscribers directly with news and other information", while ride-sharing apps can offer a transportation option, hotel chains can answer questions about accommodations, and air travel companies can allow for check-ins, flight updates and travel changes. At the 2017 Facebook F8 conference, Facebook announced a range of enhancements for bots. The slightly renamed "Discover" tab was officially launched in the United States in late June 2017. Messenger Rooms is a video conferencing feature of Messenger. It allows users to add up to 50 people at a time and does not require a Facebook account. Messenger Rooms competes with other services such as Zoom. Back in 2014, Facebook introduced an unrelated, stand-alone application named Rooms, letting users create places for users with similar interests, with users being anonymous to others. This was shut down in December 2015. In April 2020, during the COVID-19 pandemic, Facebook revealed video conferencing features for Messenger called Messenger Rooms. This was seen as a response to the popularity of other video conferencing platforms such as Zoom and Skype in the midst of the COVID-19 pandemic. Messenger Rooms allows users to add up to 50 people per room, without restrictions on time. It does not require a Facebook account or a separate app from Messenger. When used, it only prompts the user for basic information. Users can add 360° virtual backgrounds, mood lighting, and other AR effects as well as share screens. To prevent unwanted participants from joining, users can lock rooms and remove participants. 
Some have voiced concerns in regards to Messenger Room's privacy and how its parent, Facebook, handles data. Messenger Rooms, unlike some of its competitors, does not use end-to-end encryption. In addition, there have been concerns over how Messenger Rooms collects user data. Monetization In January 2017, Facebook announced that it was testing showing advertisements in Messenger's home feed. At the time, the testing was limited to a "small number of users in Australia and Thailand", with the ad format being swipe-based carousel ads. In July, the company announced that they were expanding the testing to a global audience. Stan Chudnovsky, head of Messenger, told VentureBeat that "We'll start slow ... When the average user can be sure to see them we truly don't know because we're just going to be very data-driven and user feedback-driven on making that decision". Facebook told TechCrunch that the advertisements' placement in the inbox depends on factors such as thread count, phone screen size, and pixel density. In a TechCrunch editorial by Devin Coldewey, he described the ads as "huge" in the space they occupy, "intolerable" in the way they appear in the user interface, and "irrelevant" due to the lack of context. Coldewey finished by writing "Advertising is how things get paid for on the internet, including TechCrunch, so I'm not an advocate of eliminating it or blocking it altogether. But bad advertising experiences can spoil a perfectly good app like (for the purposes of argument) Messenger. Messaging is a personal, purposeful use case and these ads are a bad way to monetize it." Reception In November 2014, the Electronic Frontier Foundation (EFF) listed Messenger (Facebook chat) on its Secure Messaging Scorecard. It received a score of 2 out of 7 points on the scorecard. It received points for having communications encrypted in transit and for having recently completed an independent security audit. It missed points because the communications were not encrypted with keys the provider didn't have access to, users could not verify contacts' identities, past messages were not secure if the encryption keys were stolen, the source code was not open to independent review, and the security design was not properly documented. As stated by Facebook in its Help Center, there is no way to log out of the Messenger application. Instead, users can choose between different availability statuses, including "Appear as inactive", "Switch accounts", and "Turn off notifications". Media outlets have reported on a workaround, by pressing a "Clear data" option in the application's menu in Settings on Android devices, which returns the user to the log-in screen. User growth After being separated from the main Facebook app, Messenger had 600 million users in April 2015. This grew to 900 million in June 2016, 1 billion in July 2016, and 1.2 billion in April 2017. In March 2020, total messaging traffic increased by 50% in countries that were on quarantine due to the COVID-19 outbreak. Group calls grew by more than 1,000%. 
Government attempt at surveillance/decryption In early 2018, the US Department of Justice went to court to attempt to force Facebook to modify its Messenger app to enable surveillance by third parties so that agents could listen in on encrypted voice conversations over Messenger. The court decided against the Justice Department, but sealed the case. In November 2018, the ACLU and EFF filed suit to have the case unsealed so that the public could be informed about the encryption/surveillance debate. This motion was denied in February 2019, and an appeal was filed in April 2020. The appeals court confirmed the original denial on July 22, 2020. 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-Hittinger-87] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to approximately c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, designs which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
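The stored-program principle described above, in which instructions reside in the same memory as data, can be illustrated with a deliberately tiny register machine. The following Python sketch is purely pedagogical and does not model any historical computer:

# A toy stored-program machine: memory holds both the program and its data.
# Instructions are (opcode, operand) pairs; execution starts at address 0.

def run(memory):
    acc = 0   # a single accumulator register
    pc = 0    # program counter: address of the next instruction
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        elif op == "JUMP":    # branch: pc <- arg
            pc = arg
        elif op == "HALT":
            return acc

# Program occupies addresses 0-3; data occupies addresses 4-6.
memory = [
    ("LOAD", 4),     # 0: load the first operand
    ("ADD", 5),      # 1: add the second operand
    ("STORE", 6),    # 2: store the result
    ("HALT", None),  # 3: stop
    20,              # 4: data
    22,              # 5: data
    0,               # 6: result goes here
]
print(run(memory))   # prints 42; memory[6] now also holds 42

Because the program is just data in memory, changing the machine's behaviour means rewriting those memory cells rather than rewiring hardware, which is the essential contrast with fixed-program machines such as ENIAC as originally configured.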
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a single microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, for example by their underlying architecture or by their size and intended purpose. A computer does not need to be electronic, nor does it need to have a processor, RAM, or even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which a computer is given data and by which its operations are controlled. Examples include keyboards, mice, scanners, and microphones. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include displays and printers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] In simplified terms, the control system repeatedly reads the instruction that the program counter points to, decodes it, fetches any data it needs, hands that data to the ALU or to an input/output device as required, stores the result, and advances the program counter to the next instruction; some of these steps may be performed concurrently or in a different order depending on the type of CPU (a toy sketch of this fetch-decode-execute cycle, written in C, follows this passage). Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another, yet smaller, computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
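The fetch-decode-execute cycle and the program counter described above can be made concrete with a toy machine. The following C sketch is purely illustrative: the opcodes, the register count and the memory size are invented for this example and do not correspond to any real CPU's instruction set. It shows a program counter stepping through instructions that sit in the same memory array as the data, an ALU-style add, and a conditional jump that creates a loop.

    #include <stdio.h>

    /* A toy stored-program machine: 4 registers, 32 words of memory.
       The opcodes are invented for this sketch: 0 = halt, 1 = load an
       immediate value into a register, 2 = add one register to another,
       3 = jump to an address if a register is non-zero. */
    enum { HALT, LOADI, ADD, JNZ };

    int main(void) {
        int mem[32] = {
            /* addr 0  */ LOADI, 0, 10,   /* r0 = 10 (loop counter)        */
            /* addr 3  */ LOADI, 1, 0,    /* r1 = 0  (running total)       */
            /* addr 6  */ ADD,   1, 0,    /* r1 = r1 + r0                  */
            /* addr 9  */ LOADI, 2, -1,   /* r2 = -1                       */
            /* addr 12 */ ADD,   0, 2,    /* r0 = r0 + r2 (count down)     */
            /* addr 15 */ JNZ,   0, 6,    /* if r0 != 0, jump back to 6    */
            /* addr 18 */ HALT
        };
        int reg[4] = {0};
        int pc = 0;                       /* the program counter */

        for (;;) {
            int opcode = mem[pc];         /* fetch */
            switch (opcode) {             /* decode and execute */
            case LOADI: reg[mem[pc + 1]]  = mem[pc + 2];      pc += 3; break;
            case ADD:   reg[mem[pc + 1]] += reg[mem[pc + 2]]; pc += 3; break;
            case JNZ:   pc = reg[mem[pc + 1]] ? mem[pc + 2] : pc + 3;  break;
            case HALT:  printf("total = %d\n", reg[1]);       return 0;
            }
        }
    }

Because the program lives in the same array as the data it manipulates, an instruction could in principle read or even overwrite another instruction, which is the essence of the stored-program idea discussed later in this article.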
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (28 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
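The byte and two's-complement conventions described above can be checked directly in a few lines of C. This is a minimal sketch rather than anything taken from the article's sources; it assumes the 8-bit bytes and two's-complement behaviour of practically all modern machines, and the specific values are arbitrary.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* One byte holds 2^8 = 256 distinct bit patterns. Read as an
           unsigned number the patterns span 0..255; read as two's
           complement they span -128..+127. */
        uint8_t raw = 0xFF;                       /* bit pattern 1111 1111 */
        printf("0xFF as unsigned: %u\n", (unsigned)raw);          /* 255 */
        printf("0xFF as two's complement: %d\n", (int)(int8_t)raw); /* -1 */

        /* Larger numbers occupy several consecutive bytes; a 32-bit value
           uses four. The order in which the bytes appear in memory depends
           on the machine's endianness. */
        uint32_t big = 305419896;                 /* 0x12345678 */
        const unsigned char *p = (const unsigned char *)&big;
        printf("stored across %zu bytes: %02X %02X %02X %02X\n",
               sizeof big, p[0], p[1], p[2], p[3]);
        return 0;
    }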
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
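The time-sharing idea described above, in which each program receives a short slice of processor time in turn, can be mimicked with a cooperative round-robin loop. This is only an illustrative sketch: the "programs" here are ordinary C functions that do one small unit of work and return, and the main loop stands in for the scheduler; a real operating system relies on hardware interrupts to pre-empt tasks, which plain portable C cannot demonstrate.

    #include <stdio.h>
    #include <stdbool.h>

    /* Each "program" does a small unit of work per time slice and reports
       whether it has finished. */
    typedef bool (*task_fn)(int *state);

    static bool count_to_three(int *state) {
        printf("task A: step %d\n", ++*state);
        return *state >= 3;                 /* finished after 3 slices */
    }

    static bool count_to_five(int *state) {
        printf("task B: step %d\n", ++*state);
        return *state >= 5;                 /* finished after 5 slices */
    }

    int main(void) {
        task_fn tasks[] = { count_to_three, count_to_five };
        int states[]    = { 0, 0 };
        bool done[]     = { false, false };
        int remaining   = 2;

        while (remaining > 0) {             /* the "scheduler" loop */
            for (int i = 0; i < 2; i++) {
                if (!done[i] && tasks[i](&states[i])) {
                    done[i] = true;
                    remaining--;
                }
            }
        }
        return 0;
    }

Note how task A finishes after three slices while task B keeps receiving slices afterwards; this interleaving is the sense in which several programs appear to make progress "at the same time" even though only one runs at any given instant.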
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. By comparison, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
Such a program can be written in the MIPS assembly language in only a handful of instructions (an equivalent loop, sketched in C, follows this passage). Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some are intended for general-purpose programming, others are useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
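The MIPS listing that the passage above refers to is not reproduced in this text. As a stand-in, the following minimal C sketch performs the same job, adding the integers from 1 to 1,000 with a short loop; the variable names are chosen here purely for illustration.

    #include <stdio.h>

    int main(void) {
        int sum = 0;
        /* Repeat the addition for every integer from 1 to 1,000. */
        for (int n = 1; n <= 1000; n++) {
            sum += n;
        }
        printf("%d\n", sum);   /* prints 500500 */
        return 0;
    }

A compiler would translate this loop into a handful of machine instructions of exactly the kind discussed next: a comparison, a conditional branch, an add, and a jump back to the top of the loop.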
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves analysing the problem, collecting inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and producing solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (a toy numerical sketch follows at the end of this section). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has given rise to many standards organizations, clubs and societies of both a formal and informal nature.
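The contrast drawn above between explicitly programmed rules and learned parameters can be made concrete with a toy example. The sketch below fits a single parameter w so that y is approximately w times x on a handful of made-up data points, nudging w slightly on every pass over the data; the data values, learning rate and iteration count are invented for illustration and are not taken from any system mentioned in this article.

    #include <stdio.h>

    /* Toy "training": fit y = w * x to a few example points by gradient
       descent on the squared error. The parameter w is not programmed
       with the answer; it is adjusted from the data. */
    int main(void) {
        const double xs[] = {1.0, 2.0, 3.0, 4.0};
        const double ys[] = {2.1, 3.9, 6.2, 7.8};   /* roughly y = 2x */
        const int n = 4;
        double w = 0.0;                  /* the model's single parameter */
        const double rate = 0.01;        /* learning rate */

        for (int step = 0; step < 1000; step++) {
            double grad = 0.0;
            for (int i = 0; i < n; i++) {
                double err = w * xs[i] - ys[i];
                grad += 2.0 * err * xs[i];           /* d/dw of err^2 */
            }
            w -= rate * grad / n;        /* nudge w against the gradient */
        }
        printf("learned w = %f\n", w);   /* ends up close to 2.0 */
        return 0;
    }

The point of the sketch is that nowhere is the answer (roughly w = 2) written into the program; it emerges from repeatedly adjusting the parameter against the training data, which is what distinguishes a learned model from explicitly programmed code.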
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTECrotty199517-128] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research but decided to turn what it had developed with Nintendo and Sega into a console of its own, based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal won Ohga's enthusiasm, there remained opposition from a majority of those present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to keep the project alive and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name, in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation since Namco rivalled Sega in the arcade market. Attracting these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues forming outside shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One retailer later recalled of the North American launch: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because the trademark had been registered by another company; the officially distributed Sega Saturn therefore dominated the market at first, but as Sega withdrew, imports of the PlayStation, along with widespread piracy, increased. In China, the most popular 32-bit console was initially the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised using the controller's button symbols, rendered here in plain text as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly spent at least £100,000 a year of slush-fund money on such impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance: Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSone, a smaller, redesigned variant, which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving the same milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw up to 4,000 sprites and 180,000 texture-mapped, light-sourced polygons per second, or 360,000 polygons per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications in C. 
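Since Net Yaroze games were written in C, a purely illustrative sketch can convey the kind of fixed-point rotate-translate-project step the GTE accelerates. Everything below is an assumption made for this example: the structure names, the 4.12 fixed-point format and the projection distance are illustrative, and the real GTE is driven through coprocessor instructions and dedicated registers rather than plain C arithmetic like this.

```c
#include <stdio.h>
#include <stdint.h>

#define FP_SHIFT 12  /* 4.12 fixed point: 4096 represents 1.0 */

typedef struct { int16_t x, y, z; } Vec3;                 /* object-space vertex */
typedef struct { int16_t m[3][3]; int32_t t[3]; } Mat;    /* fixed-point rotation + translation */

/* Rotate, translate and perspective-project one vertex.
 * Overflow handling is omitted for clarity; on real hardware this whole
 * step is performed by the GTE coprocessor rather than general-purpose code. */
static void transform(const Mat *mt, Vec3 v, int32_t h, int32_t *sx, int32_t *sy)
{
    int32_t p[3];
    for (int i = 0; i < 3; i++) {
        p[i] = ((int32_t)mt->m[i][0] * v.x +
                (int32_t)mt->m[i][1] * v.y +
                (int32_t)mt->m[i][2] * v.z) >> FP_SHIFT;  /* back to integer units */
        p[i] += mt->t[i];                                 /* camera-space translation */
    }
    if (p[2] <= 0) p[2] = 1;                              /* clamp depth at the camera plane */
    *sx = p[0] * h / p[2];                                /* perspective divide: x and y shrink with depth */
    *sy = p[1] * h / p[2];
}

int main(void)
{
    /* Identity rotation (1.0 == 4096), camera 1000 units back, projection distance 256. */
    Mat mt = { { {4096, 0, 0}, {0, 4096, 0}, {0, 0, 4096} }, {0, 0, 1000} };
    Vec3 v = { 100, 50, 0 };
    int32_t sx, sy;
    transform(&mt, v, 256, &sx, &sy);
    printf("screen: (%d, %d)\n", (int)sx, (int)sy);       /* prints (25, 12) */
    return 0;
}
```

Offloading this per-vertex work to a dedicated coprocessor, instead of running code like the above on the CPU, is what allowed the 33 MHz machine to sustain the polygon rates quoted earlier.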
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation, which became the highest-selling console through the end of the year. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack"; it also included a car cigarette-lighter adaptor, adding a layer of portability. Production of the LCD Combo pack ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no" respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used for accessing menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. It also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite its having received promotion in Europe and North America. In addition to playing games, most PlayStation models can play CD audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game disc inserted, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, allowing it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and of patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore duplicated discs omitted it, since the laser pick-up system of any ordinary optical drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects in the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations uses a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates this wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem on later models by making the sled out of die-cast metal and placing the laser unit further away from the power supply. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (Battle Arena Toshinden) and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo. 
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most ever produced for a console. Its success was a significant financial boon for Sony, with the video game division contributing 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound that it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given the company's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get them onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
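To see how a 40% price cut and unchanged net revenue can be compatible, consider a back-of-the-envelope check with purely hypothetical figures (the source gives no actual prices): if a cartridge release retails at $65 with about $27 of per-unit ROM manufacturing cost, the publisher clears $65 − $27 = $38 per copy; a CD release priced 40% lower at $39, with pressing costs of roughly $1 per disc, clears $39 − $1 = $38. On these assumed numbers, the entire $26 discount to the user is absorbed by the $26 gap in media cost.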
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo themselves or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Interplanetary_contamination] | [TOKENS: 4458]
Interplanetary contamination Interplanetary contamination refers to biological contamination of a planetary body by a space probe or spacecraft, either deliberate or unintentional. There are two types of interplanetary contamination: forward contamination, the transfer of viable organisms from Earth to another celestial body, and back contamination, the transfer of extraterrestrial organisms back into Earth's biosphere. The main focus is on microbial life and on potentially invasive species. Non-biological forms of contamination have also been considered, including contamination of sensitive deposits of scientific interest, such as lunar polar ice deposits. In the case of back contamination, multicellular life is thought unlikely but has not been ruled out. In the case of forward contamination, contamination by multicellular life (e.g. lichens) is unlikely to occur for robotic missions, but it becomes a consideration in crewed missions to Mars. Current space missions are governed by the Outer Space Treaty and the COSPAR guidelines for planetary protection. Forward contamination is prevented primarily by sterilizing the spacecraft. In the case of sample-return missions, the aim of the mission is to return extraterrestrial samples to Earth, and sterilizing the samples would make them of much less interest; back contamination would therefore be prevented mainly by containment and by breaking the chain of contact between the planet of origin and Earth. It would also require quarantine procedures for the materials and for anyone who comes into contact with them. Overview Most of the Solar System appears hostile to life as we know it, and no extraterrestrial life has ever been discovered. But if extraterrestrial life exists, it may be vulnerable to interplanetary contamination by foreign microorganisms. Some extremophiles may be able to survive space travel to another planet, and foreign life could possibly be introduced by spacecraft from Earth. Some believe this possibility poses scientific and ethical concerns. Locations within the Solar System where life might exist today include the oceans of liquid water beneath the icy surfaces of Europa and Enceladus, and Titan (its surface has oceans of liquid ethane and methane, but it may also have liquid water below the surface and ice volcanoes). There are multiple consequences of both forward and back contamination. If a planet becomes contaminated with Earth life, it might then be difficult to tell whether any lifeforms discovered originated there or came from Earth. Furthermore, the organic chemicals produced by the introduced life would confuse sensitive searches for biosignatures of living or ancient native life; the same applies to other, more complex biosignatures. Life on other planets could have a common origin with Earth life, since in the early Solar System there was much exchange of material between the planets which could have transferred life as well. If so, it might be based on nucleic acids too (RNA or DNA). Most species of microorganisms on Earth are not yet well understood, characterized or DNA-sequenced; many are known only from DNA fragments obtained with swabs and cannot be cultured in labs. This applies particularly to the unculturable archaea, which are therefore difficult to study, whether because they depend on the presence of other microorganisms, are slow-growing, or depend on other conditions not yet understood; in typical habitats, 99% of microorganisms are not culturable. On a contaminated planet, it might thus be difficult to distinguish the DNA of extraterrestrial life from the DNA of life brought to the planet by the explorers. 
Introduced Earth life could contaminate resources of value for future human missions, such as water. If there is life on the planet, invasive species could outcompete or consume it: the experience on Earth shows that species moved from one continent to another may be able to outcompete the native life adapted to that continent. Additionally, evolutionary processes on Earth might have developed biological pathways different from those of extraterrestrial organisms, allowing Earth life to outcompete them. The same is also possible the other way around for contamination introduced to Earth's biosphere. In addition to scientific concerns, ethical and moral concerns have also been raised regarding intentional or unintentional interplanetary transport of life. Evidence for possible habitats outside Earth Enceladus and Europa show the best evidence for current habitats, mainly due to the possibility of their hosting liquid water and organic compounds. There is ample evidence to suggest that Mars once offered habitable conditions for microbial life. It is therefore possible that microbial life may have existed on Mars, although no evidence has been found. It is thought that many bacterial spores (endospores) from Earth were carried on spacecraft to Mars, and some may remain protected within Martian rovers and landers on the planet's surface. In that sense, Mars may already have been contaminated. Certain lichens from the Arctic permafrost are able to photosynthesize and grow in the absence of any liquid water, simply by using the humidity from the atmosphere. They are also highly tolerant of UV radiation, using melanin and other, more specialized chemicals to protect their cells. Although numerous studies point to resistance to some Martian conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith and others, all at the same time and in combination. Laboratory simulations show that whenever multiple lethal factors are combined, survival rates plummet quickly. Other studies have suggested the potential for life to survive using deliquescing salts, which, like the lichens, draw on the humidity of the atmosphere. If the mixture of salts is right, organisms may obtain liquid water at times of high atmospheric humidity, with the salts capturing enough to be capable of supporting life. Research published in July 2017 showed that when irradiated with a simulated Martian UV flux, perchlorates become even more lethal to bacteria (a bactericidal effect); even dormant spores lost viability within minutes. In addition, two other components of the Martian surface, iron oxides and hydrogen peroxide, act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death compared to cells exposed to UV radiation alone, after 60 seconds of exposure. It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species. The researchers concluded that "the surface of Mars is lethal to vegetative cells and renders much of the surface and near-surface regions uninhabitable." This research demonstrates that the present-day surface is more uninhabitable than previously thought, and reinforces the notion that searches for life should inspect at least a few meters into the ground, where radiation levels would be relatively low. 
The Cassini spacecraft directly sampled the plumes escaping from Enceladus. The measured data indicate that these geysers are made primarily of salt-rich particles with an "ocean-like" composition, thought to originate from a subsurface ocean of liquid saltwater rather than from the moon's icy surface. Data from the geyser flythroughs also indicate the presence of organic chemicals in the plumes. Heat scans of Enceladus's surface likewise indicate higher temperatures around the fissures where the geysers originate, with temperatures reaching −93 °C (−135 °F), which is 115 °C (207 °F) warmer than the surrounding surface regions. Europa has much indirect evidence for its subsurface ocean. Models of how Europa is affected by tidal heating require a subsurface layer of liquid water in order to accurately reproduce the linear fracturing of the surface. Indeed, observations by the Galileo spacecraft of how Europa's magnetic field interacts with Jupiter's field strengthen the case for a liquid, rather than solid, layer: an electrically conductive fluid deep within Europa would explain these results. Observations from the Hubble Space Telescope in December 2012 appear to show an ice plume spouting from Europa's surface, which would immensely strengthen the case for a liquid subsurface ocean. As with Enceladus, vapour geysers would allow for easy sampling of the liquid layer. However, there appears to be little evidence that geysering is a frequent event on Europa, given the lack of water detected in the space near the moon. Planetary protection Forward contamination is prevented by sterilizing space probes sent to sensitive areas of the Solar System. Missions are classified depending on whether their destinations are of interest for the search for life, and on whether there is any chance that Earth life could reproduce there. NASA made these policies official with the issuing of Management Manual NMI-4-4-1, NASA Unmanned Spacecraft Decontamination Policy, on September 9, 1963. Prior to NMI-4-4-1, the same sterilization requirements applied to all outgoing spacecraft regardless of their target; difficulties in sterilizing the Ranger probes sent to the Moon were the primary reason for NASA's change to assessing the likelihood of forward contamination on a target-by-target basis. Some destinations, such as Mercury, need no precautions at all; others, such as the Moon, require documentation but nothing more; while destinations such as Mars require sterilization of the rovers sent there. Back contamination would be prevented by containment or quarantine. However, there have been no sample-returns thought to have any possibility of a back contamination risk since the Apollo missions. The Apollo regulations have been rescinded, and new regulations have yet to be developed (see suggested precautions for sample-returns). Crewed spacecraft Crewed spacecraft are of particular concern for interplanetary contamination because of the impossibility of sterilizing a human to the same level as a robotic spacecraft; the chance of forward contamination is therefore higher than for a robotic mission. Humans are typically host to a hundred trillion microorganisms of ten thousand species in the human microbiome, which cannot be removed while preserving the life of the human. Containment seems the only option, but effective containment to the same standard as a robotic rover appears difficult to achieve with present-day technology. 
In particular, adequate containment in the event of a hard landing is a major challenge. Human explorers could also be carriers back to Earth of any microorganisms acquired on Mars, if such microorganisms exist. Another issue is the contamination of the water supply by Earth microorganisms shed by humans in their stools, skin and breath, which could have a direct effect on long-term human colonization of Mars. One historical measure taken to prevent planetary contamination of the Moon was the anti-bacterial filter included in the Apollo Lunar Module from Apollo 13 onward; it was placed on the cabin relief valve to prevent contaminants from the cabin being released into the lunar environment during the depressurization of the crew compartment prior to EVA. The Moon The Apollo 11 mission incited public concern about the possibility of microbes on the Moon, creating fears of a plague being brought to Earth when the astronauts returned; NASA received thousands of letters from Americans concerned about the potential for back contamination. The Moon has been suggested as a testbed for new technology to protect sites in the Solar System, and astronauts, from forward and back contamination. Currently, the Moon has no contamination restrictions because it is considered "not of interest" for prebiotic chemistry and the origins of life. Analysis of the contamination left by the Apollo program astronauts could also yield useful ground truth for planetary protection models. Non-contaminating exploration methods One of the most reliable ways to reduce the risk of forward and back contamination during visits to extraterrestrial bodies is to use only robotic spacecraft. Humans in close orbit around the target planet could control equipment on the surface in real time via telepresence, bringing many of the benefits of a surface mission without its associated forward- and back-contamination risks. Back contamination issues Since the Moon is now generally considered to be free of life, the most likely source of contamination would be Mars, during either a Mars sample-return mission or a crewed mission to Mars. The possibility of new human pathogens, or of environmental disruption due to back contamination, is considered to be of extremely low probability, but it cannot yet be ruled out. NASA and ESA are actively developing a Mars Sample Return Program to return samples collected by the Perseverance rover to Earth. The European Space Foundation report cites many advantages of a Mars sample-return. In particular, it would permit extensive analyses on Earth, without the size and weight constraints of instruments sent to Mars on rovers. These analyses could also be carried out without the communication delays that affect experiments run by Martian rovers, and it would be possible to repeat experiments in multiple laboratories with different instruments to confirm key results. Carl Sagan was the first to publicize the back contamination issues that might follow from a Mars sample-return. In The Cosmic Connection (1973) he wrote: Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage. Later, in Cosmos (1980), Sagan wrote: Perhaps Martian samples can be safely returned to Earth. But I would want to be very sure before considering a returned-sample mission. 
NASA and ESA views are similar: their findings were that, with present-day technology, Martian samples can be safely returned to Earth provided the right precautions are taken. NASA has already had experience with returning samples thought to represent a low back contamination risk, when samples were returned for the first time by Apollo 11. At the time, it was thought that there was a low probability of life on the Moon, so the requirements were not very stringent; the precautions taken then would, however, be inadequate by current standards. The regulations used then have been rescinded, and new regulations and approaches for a sample-return would be needed. A sample-return mission would be designed to break the chain of contact between Mars and the exterior of the sample container, for instance by sealing the returned container inside a larger container in the vacuum of space before it returns to Earth. To eliminate the risk posed by parachute failure, the capsule could simply fall at terminal velocity, with the impact cushioned by the capsule's thermal protection system; the sample container would be designed to withstand the force of the impact. To receive, analyze and curate extraterrestrial soil samples, NASA has proposed building a biohazard containment facility, tentatively known as the Mars Sample Return Receiving Facility (MSRRF). This future facility would have to be rated biosafety level 4 (BSL-4). While existing BSL-4 facilities deal primarily with fairly well-known organisms, a BSL-4 facility focused on extraterrestrial samples must plan its systems carefully in advance, mindful that there will be unforeseen issues during sample evaluation and curation requiring independent thinking and solutions. The facility's systems must be able to contain unknown biohazards, as the sizes of any putative Martian microorganisms are unknown. In consideration of this, additional requirements have been proposed: ideally, the facility should filter particles of 0.01 μm or larger, and release of a particle of 0.05 μm or larger would be unacceptable under any circumstance. The reason for the extremely small 0.01 μm limit is the consideration of gene transfer agents (GTAs), virus-like particles produced by some microorganisms that package random segments of DNA capable of horizontal gene transfer. These randomly incorporate segments of the host genome and can transfer them to other, evolutionarily distant hosts without killing the new host; in this way many archaea and bacteria can swap DNA with each other. This raises the possibility that Martian life, if it shares a common origin with Earth life in the distant past, could swap DNA with Earth microorganisms in the same way. In one experiment reported in 2010, researchers left GTAs (carrying DNA conferring antibiotic resistance) and marine bacteria overnight in natural conditions and found that by the next day up to 47% of the bacteria had incorporated the genetic material from the GTAs. The 0.05 μm limit also reflects the discovery of ultramicrobacteria as small as 0.2 μm across. The BSL-4 containment facility must also double as a cleanroom to preserve the scientific value of the samples. A challenge is that, while it is relatively easy simply to contain the samples once returned to Earth, researchers will also want to remove parts of the sample and perform analyses; during all these handling procedures, the samples would need to be protected from Earthly contamination. 
A cleanroom is normally kept at a higher pressure than the external environment to keep contaminants out, while a biohazard laboratory is kept at a lower pressure to keep the biohazards in; combining the two in a single building would require compartmentalizing the specialized rooms. Suggested solutions include a triple-walled containment facility and extensive robotic handling of the samples. The facility would be expected to take 7 to 10 years from design to completion, with an additional two years recommended for the staff to become accustomed to the facilities. Robert Zubrin, of the Mars Society, maintains that the risk of back contamination is negligible, supporting this with an argument based on the possibility of the transfer of life from Earth to Mars on meteorites. Margaret Race has examined in detail the legal process of approval for a Mars sample-return. She found that under the National Environmental Policy Act (NEPA), which did not exist in the Apollo era, a formal environmental impact statement would likely be required, along with public hearings during which all the issues would be aired openly. This process is likely to take up to several years to complete, during which, she found, the full range of worst-case accident scenarios, impacts, and project alternatives would be played out in the public arena. Other agencies, such as the Environmental Protection Agency and the Occupational Safety and Health Administration, might also become involved in the decision-making process. The laws on quarantine would also need to be clarified, as the regulations for the Apollo program were rescinded. In the Apollo era, NASA delayed announcing its quarantine regulations until the day Apollo was launched, bypassing the requirement for public debate, something that would likely not be tolerated today. It is also probable that presidential directive NSC-25 would apply, requiring a review of large-scale alleged effects on the environment, carried out subsequent to other domestic reviews and through a long process leading eventually to presidential approval of the launch. Apart from these domestic legal hurdles, numerous international regulations and treaties would need to be negotiated in the case of a Mars sample-return, especially those relating to environmental protection and health. Race concluded that the public of necessity has a significant role to play in the development of the policies governing Mars sample-return. Several exobiologists have suggested that a Mars sample-return is not necessary at this stage, and that it is better to focus first on in situ studies on the surface. Although it is not their main motivation, this approach would of course also eliminate back contamination risks. Some of these exobiologists advocate more in situ studies followed by a sample-return in the near future; others go as far as to advocate in situ study instead of a sample-return at the present state of understanding of Mars. Their reasoning is that life on Mars is likely to be hard to find: any present-day life is likely to be sparse and to occur in only a few niche habitats; past life is likely to have been degraded by cosmic radiation over geological time periods if exposed in the top few meters of the Martian surface; and only certain special deposits of salts or clays on Mars would be capable of preserving organics for billions of years. 
So, they argue, there is a high risk that a Mars sample-return at our current stage of understanding would return samples no more conclusive about the origins of life on Mars, or about present-day life there, than the Martian meteorite samples we already have. Another consideration is the difficulty of keeping the sample completely free from Earth-life contamination during the return journey and during handling procedures on Earth, which might make it hard to show conclusively that any biosignatures detected do not result from contamination of the samples. Instead, they advocate sending more sensitive instruments on Mars surface rovers. These could examine many different rocks and soil types and search for biosignatures on the surface, covering a wide range of materials that could not all be returned to Earth with current technology at reasonable cost. A sample-return to Earth would then be considered at a later stage, once there is a reasonably thorough understanding of conditions on Mars, and possibly once life, current or past, has already been detected there through biosignatures and other in situ analyses. During the "Exploration Telerobotics Symposium" in 2012, experts on telerobotics from industry, NASA and academia met to discuss telerobotics and its applications to space exploration. Among other issues, particular attention was given to Mars missions and a Mars sample-return. They came to the conclusion that telerobotic approaches could permit direct study of samples on the Martian surface via telepresence from Mars orbit, permitting rapid exploration and the use of human cognition to take advantage of chance discoveries and feedback from the results obtained. They found that telepresence exploration of Mars has many advantages: the astronauts have near real-time control of the robots and can respond immediately to discoveries; it prevents contamination in both directions; and it has mobility benefits as well. Finally, returning the sample to orbit has the advantage of permitting analysis of the sample without delay, detecting volatiles that might be lost during a voyage home. Similar methods could be used to directly explore other biologically sensitive moons such as Europa, Titan or Enceladus, once human presence in the vicinity becomes possible. Forward contamination In August 2019, scientists reported that a capsule containing tardigrades (resilient microscopic animals) in a cryptobiotic state may have survived for a while on the Moon after the April 2019 crash landing of Beresheet, a failed Israeli lunar lander.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-145] | [TOKENS: 10515]
Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees from the University of Pennsylvania in 1997 before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. That same year, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership during the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. He has been criticized for spreading COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer who partly owned a rental lodge at the Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator, and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Although both Elon and Errol had previously stated that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared his dislike of apartheid. After his parents divorced in 1979, Elon, aged around nine, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted the decision and became estranged from his father. He has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies", where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten so severely that he was hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital; Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, he sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, from which he graduated. He was a decent but unexceptional student, earning 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, and to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997: a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at the energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors, and another at the Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided instead to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. In an interview with Rolling Stone, Musk rejected the notion that they started the company with funds borrowed from Errol Musk, but in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590 million in 2025), and Musk received $22 million (equivalent to $43 million in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2.7 billion in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320 million in 2025).
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180 million in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2.4 billion in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865 million in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational satellites in Earth orbit. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12 billion in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, enabling Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440 million in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11 million in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several commercially successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1.2 trillion in 2025), becoming the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2.7 billion in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him its largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter, and by the end of April he had reached an agreement to acquire the company for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives including CEO Parag Agrawal; Musk became the CEO instead. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk scaled back content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. After a Twitter poll, Musk promised to step down as CEO; five months later, he did so, transitioning to executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain, and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election in Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he said Biden's was "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that these leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a rally and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was never clearly defined. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, with its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein, posting that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait tensions, and was described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russian-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was in turn criticized for spreading misinformation and amplifying the far right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the details of the previous agreement, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at the TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concern from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he must as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After the infant's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and share custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day 2012, Musk emailed Epstein asking "Do you have any parties planned?
I've been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I'm looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house; Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don't recall introducing Epstein to anyone, as I don't know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth was derived from Tesla stock, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk became a public figure only in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, unlike other billionaires, who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like the British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-348] | [TOKENS: 12858]
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; following its full release, Jens "Jeb" Bergensten was handed control over its development. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world; a minimal sketch of this kind of block storage appears below. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
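The voxel-grid arrangement described above can be illustrated with a short sketch. The following Java fragment is illustrative only: the class and method names are hypothetical and this is not Mojang's actual code. It shows the standard approach for games of this kind, storing blocks in fixed-size chunks that are created lazily as players explore, which is what makes the world effectively infinite.

```java
// Illustrative voxel storage in the style of Minecraft-like engines.
// All names are hypothetical; this is not Mojang's implementation.
import java.util.HashMap;
import java.util.Map;

class Chunk {
    static final int SIZE = 16;      // blocks per horizontal axis
    static final int HEIGHT = 256;   // blocks per vertical axis
    final byte[] blocks = new byte[SIZE * SIZE * HEIGHT];

    private static int index(int x, int y, int z) {
        return (y * SIZE + z) * SIZE + x;   // x, z in [0, 16); y in [0, 256)
    }
    byte get(int x, int y, int z)          { return blocks[index(x, y, z)]; }
    void set(int x, int y, int z, byte id) { blocks[index(x, y, z)] = id; }
}

class World {
    // Chunks are created lazily: only regions a player has visited
    // exist in memory, which is what makes the world feel infinite.
    private final Map<Long, Chunk> chunks = new HashMap<>();

    private static long key(int cx, int cz) {
        return ((long) cx << 32) ^ (cz & 0xffffffffL);
    }

    Chunk chunkAt(int x, int z) {
        int cx = Math.floorDiv(x, Chunk.SIZE);
        int cz = Math.floorDiv(z, Chunk.SIZE);
        return chunks.computeIfAbsent(key(cx, cz), k -> new Chunk());
    }

    byte blockAt(int x, int y, int z) {
        return chunkAt(x, z).get(Math.floorMod(x, Chunk.SIZE), y,
                                 Math.floorMod(z, Chunk.SIZE));
    }
}
```

One byte per block is enough for a small illustrative palette; the real game uses a considerably richer block-state encoding.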
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player); a sketch of this seed-driven approach follows below. Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
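The map seed mentioned above is what makes world generation reproducible. The sketch below shows the underlying idea under stated assumptions: the class name, the mixing constants, and the toy heightmap are all hypothetical, and Minecraft's real generator layers many noise functions and biome rules on top of this. The essential point is deriving a deterministic per-location seed from the world seed, so revisiting the same coordinates always regenerates the same terrain without it ever being stored.

```java
// Minimal sketch of seed-driven, deterministic terrain generation.
// Not Mojang's algorithm; the constants and the toy heightmap are
// illustrative only.
import java.util.Random;

class TerrainSketch {
    final long worldSeed;

    TerrainSketch(long worldSeed) { this.worldSeed = worldSeed; }

    // Mix the world seed with the column coordinates so each (x, z)
    // column gets a reproducible seed of its own.
    long columnSeed(int x, int z) {
        long h = worldSeed;
        h = h * 6364136223846793005L + x * 341873128712L;
        h = h * 6364136223846793005L + z * 132897987541L;
        return h;
    }

    // Toy heightmap: a surface height between 60 and 67 blocks.
    int surfaceHeight(int x, int z) {
        return 60 + new Random(columnSeed(x, z)).nextInt(8);
    }
}
```

Because every value derives from the seed alone, two worlds created with the same seed produce identical terrain, which is why players share seeds to reproduce interesting worlds.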
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold; players commonly triangulate a stronghold's location from two such throws, as sketched below. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or replenishes continuously on peaceful difficulty. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
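The eye-of-ender behavior described above supports the well-known player technique referenced earlier: record the position and flight direction of a thrown eye from two different spots, then intersect the two sight lines to estimate the stronghold's coordinates. The Java sketch below illustrates the geometry; it is a community method rather than game code, and the class and method names are hypothetical.

```java
// Estimating a stronghold's position by intersecting two eye-of-ender
// flight lines; a sketch of the community triangulation method.
class StrongholdTriangulation {
    // Minecraft yaw convention (as shown on the F3 debug screen):
    // 0 degrees faces south (+z), 90 degrees faces west (-x),
    // so the horizontal direction vector is (-sin(yaw), cos(yaw)).
    static double[] direction(double yawDegrees) {
        double yaw = Math.toRadians(yawDegrees);
        return new double[] { -Math.sin(yaw), Math.cos(yaw) };
    }

    // Intersect the lines p1 + t*d1 and p2 + s*d2 in the x-z plane;
    // returns {x, z}, or null when the two throws are nearly parallel.
    static double[] intersect(double x1, double z1, double yaw1,
                              double x2, double z2, double yaw2) {
        double[] d1 = direction(yaw1), d2 = direction(yaw2);
        double det = d1[0] * d2[1] - d1[1] * d2[0];
        if (Math.abs(det) < 1e-9) return null;   // parallel sight lines
        double t = ((x2 - x1) * d2[1] - (z2 - z1) * d2[0]) / det;
        return new double[] { x1 + t * d1[0], z1 + t * d1[1] };
    }
}
```

For example, a throw from (0, 0) heading due east (yaw −90) and a throw from (5, −5) heading due south (yaw 0) intersect at approximately (5, 0), the estimated stronghold column: intersect(0, 0, -90, 5, -5, 0) returns {5.0, 0.0}.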
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a hosting provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players; a few examples are shown below. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced Realms support for cross-platform play between Windows 10, iOS, and Android platforms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
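The operator commands mentioned above are typed into the server console or, with a leading slash (for example /time set day), into in-game chat. A few representative vanilla Java Edition commands are shown below; the player names are placeholders, the selection is illustrative rather than exhaustive, and the annotations after "#" are explanatory comments (as used in mcfunction files) rather than part of the commands.

```
time set day          # set the world time to morning
tp Alice 100 64 -200  # teleport the player Alice to x=100, y=64, z=-200
whitelist add Bob     # allow the username Bob to join the server
ban Mallory           # bar a player from entering the server
```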
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another based on Fallout was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions usually received annual major updates—free to players who have purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled with the default textures on an ordinary world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to be released on Java Edition at a later date.

Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] this earliest phase culminated on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer.

On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial, and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay to other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One, and was renamed to the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, and a physical copy available on a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition released a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows. 
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that Education Edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020.

On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.

On Windows, the Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate.

Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on Omniarchive, a community archive website.

Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine."

The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the music production software Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", introducing pieces from Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with their label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the game's mini games from the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record as of then had tallied up to be longer than the previous two albums combined, which in total clocks in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has since not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether or not there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best received port to date, being praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content.

Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies.

The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million.

In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award - PC and Console.

Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Notch's decision to sell Mojang.

In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated.

In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue to what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start a hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update, while the losing mobs were scrapped, though after the first mob vote this was changed, and losing mobs would now have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot on an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion.

Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap.

In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having a training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark in fullscale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries with the highest point at 171 meters (ranking as the country with the 30th smallest elevation span), where the limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as being "clones", often due to a direct inspiration from Minecraft, or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Despite this, the fears of fans were unfounded, with official Minecraft releases on Nintendo consoles eventually resuming. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in-person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded to "Minecraft Live", included the mob/biome votes, and announcements of new game updates. 
In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Buffalo,_New_York] | [TOKENS: 11428]
Buffalo, New York Buffalo is a city in the U.S. state of New York. It lies in Western New York at the eastern end of Lake Erie and at the head of the Niagara River on the Canada–United States border. It is the second-most populous city in New York, with a population of 278,349 at the 2020 census. The Buffalo–Niagara Falls metropolitan area, with over 1.16 million residents, is the 51st-largest metropolitan area in the United States. Buffalo is the county seat of Erie County. Before the 17th century, the region was inhabited by nomadic Paleo-Indians who were succeeded by the Neutral, Erie, and Iroquois nations. In the early 17th century, the French began to explore the region. In the 18th century, Iroquois land surrounding Buffalo Creek was ceded through the Holland Land Purchase, and a small village was established at its headwaters. Buffalo was selected as the terminus of the Erie Canal in 1825, which led to its incorporation in 1832 and stimulated its growth as the primary inland port between the Great Lakes and Atlantic Ocean. Transshipment made Buffalo the world's largest grain port in that era. After the coming of railroads greatly reduced the canal's importance, the city became the second-largest railway hub (after Chicago) and, by the 20th century, came to be dominated by steel production. Later, deindustrialization and the opening of the St. Lawrence Seaway saw the city's economy decline and diversify. It developed its service industries, such as health care, retail, tourism, logistics, and education, while retaining some manufacturing. In 2019, the gross domestic product of the Buffalo–Niagara Falls metropolitan area was $53 billion (~$64 billion in 2024). The city's cultural landmarks include the oldest urban parks system in the United States, the Buffalo AKG Art Museum, the Buffalo History Museum, the Buffalo Philharmonic Orchestra, Shea's Performing Arts Center, the Buffalo Museum of Science, and several annual festivals. Its educational institutions include the University at Buffalo, Buffalo State University, Canisius University, and D'Youville University. Buffalo is also known for its winter weather, Buffalo wings, and two major-league sports teams: the National Football League's Buffalo Bills and the National Hockey League's Buffalo Sabres.

History Before the arrival of Europeans, nomadic Paleo-Indians inhabited the western New York region from the 8th millennium BCE. The Woodland period began around 1000 BCE, marked by the rise of the Iroquois Confederacy and the spread of its tribes throughout the state. Seventeenth-century Jesuit missionaries were the first Europeans to visit the area. During French exploration in 1620, the region was sparsely populated and occupied by the agrarian Erie people in the south and the Neutral Nation in the north, with a relatively small tribe, the Wenrohronon, between them, and the Senecas, an Iroquois tribe, occupying the land just east of the region. The Neutral grew tobacco and hemp to trade with the Iroquois, who traded furs with the French for European goods. The tribes used animal and war paths to travel and move goods across what today is New York State. (Centuries later, these same paths were gradually improved, then paved, then developed into major modern roads.) Traditional Seneca oral legends, as recounted by professional storytellers known as Hagéotâ, were highly participatory.
These tales were told only during winter, as they were believed to have the power to put even animals and plants to sleep, which could affect the harvest. At the conclusion, audience members typically offered gifts, such as tobacco, to the storyteller as a sign of appreciation. During the Beaver Wars in the mid-17th century, the Senecas conquered the Erie and Neutrals in the region. Native Americans did not settle along Buffalo Creek permanently until 1780, when displaced Senecas were relocated from Fort Niagara. The Seneca town of Došowëh, meaning "Between the basswoods," was historically located on Buffalo Creek, and Došowëh continues to be used as the Seneca name for the modern city of Buffalo. Louis Hennepin and Sieur de La Salle explored the upper Niagara and Ontario regions in the late 1670s. In 1679, La Salle's ship, Le Griffon, became the first to sail above Niagara Falls near Cayuga Creek. Baron de Lahontan visited the site of Buffalo in 1687. A small French settlement along Buffalo Creek lasted for only a year (1758). After the French and Indian War, the region was ruled by Britain.

After the American Revolution, the Province of New York—now a U.S. state—began westward expansion, looking for arable land by following the Iroquois. New York and Massachusetts were vying for the territory which included Buffalo, and Massachusetts had the right to purchase all but a one-mile-wide (1,600-meter) portion of land. The rights to the Massachusetts territories were sold to Robert Morris in 1791. Despite objections from Seneca chief Red Jacket, Morris brokered a deal between fellow chief Cornplanter and the Dutch dummy corporation Holland Land Company.[a] The Holland Land Purchase gave the Senecas three reservations, and the Holland Land Company received 4,000,000 acres (16,000 km2) for about thirty-three cents per acre. Permanent white settlers along the creek were prisoners captured during the Revolutionary War. Early landowners were Iroquois interpreter Captain William Johnston, formerly enslaved man Joseph "Black Joe" Hodges and Cornelius Winney, a Dutch trader who arrived in 1789. As a result of the war, in which the Iroquois sided with the British Army, Iroquois territory was gradually reduced in the late 1700s by European settlers through successive statewide treaties which included the Treaty of Fort Stanwix (1784) and the First Treaty of Buffalo Creek (1788). The Iroquois were moved onto reservations, including Buffalo Creek. By the end of the 18th century, only 338 sq mi (216,000 acres; 880 km2; 88,000 ha) of reservations remained. After the Treaty of Big Tree removed Iroquois title to lands west of the Genesee River in 1797, Joseph Ellicott surveyed land at the mouth of Buffalo Creek. In the middle of the village was an intersection of eight streets at present-day Niagara Square. Originally named New Amsterdam, the village was soon renamed Buffalo. The village of Buffalo was named for Buffalo Creek.[b] British military engineer John Montresor referred to "Buffalo Creek" in his 1764 journal, the earliest recorded appearance of the name. A road to Pennsylvania from Buffalo was built in 1802 for migrants traveling to the Connecticut Western Reserve in Ohio. Before an east–west turnpike across the state was completed, traveling from Albany to Buffalo would take a week; a trip from nearby Williamsville to Batavia could take over three days.[c] British forces burned Buffalo and the northwestern village of Black Rock in 1813.
The battle and subsequent fire were in response to the destruction of Niagara-on-the-Lake by American forces and other skirmishes during the War of 1812. Rebuilding was swift, completed in 1815. With the village still a remote outpost, residents hoped that the proposed Erie Canal would bring prosperity to the area. To accomplish this, Buffalo's harbor was expanded with the help of Samuel Wilkeson; it was selected as the canal's terminus over the rival Black Rock. The canal opened in 1825, ushering in commerce, manufacturing and hydropower. By the following year, the 130 sq mi (340 km2) Buffalo Creek Reservation (at the western border of the village) was transferred to Buffalo. Buffalo was incorporated as a city in 1832. During the 1830s, businessman Benjamin Rathbun significantly expanded its business district. The city doubled in size from 1845 to 1855. Almost two-thirds of the city's population was foreign-born, largely a mix of unskilled (or uneducated) Irish and German Catholics. Fugitive slaves made their way north to Buffalo during the 1840s. Buffalo was a terminus of the Underground Railroad, with many free Black people crossing the Niagara River to Fort Erie, Ontario; others remained in Buffalo. During this time, Buffalo's port continued to develop. Passenger and commercial traffic expanded, leading to the creation of feeder canals and the expansion of the city's harbor. Unloading grain in Buffalo was a laborious job, and grain handlers working on lake freighters would make $1.50 a day (equivalent to $52 in 2025) in a six-day work week. Local inventor Joseph Dart and engineer Robert Dunbar created the grain elevator in 1843, adapting the steam-powered elevator. Dart's Elevator initially processed one thousand bushels per hour, speeding global distribution to consumers. Buffalo was the transshipment hub of the Great Lakes, and weather, maritime and political events in other Great Lakes cities had a direct impact on the city's economy. In addition to grain, Buffalo's primary imports included agricultural products from the Midwest (meat, whiskey, lumber and tobacco), and its exports included leather, ships and iron products. The mid-19th century saw the rise of new manufacturing capabilities, particularly with iron. By the 1860s, many railroads terminated in Buffalo; they included the Buffalo, Bradford and Pittsburgh Railroad, Buffalo and Erie Railroad, the New York Central Railroad, and the Lehigh Valley Railroad. During this time, Buffalo controlled one-quarter of all shipping traffic on Lake Erie. After the Civil War, canal traffic began to drop as railroads expanded into Buffalo. Unionization began to take hold in the late 19th century, highlighted by the Great Railroad Strike of 1877 and the 1892 Buffalo switchmen's strike.

At the start of the 20th century, Buffalo was the world's leading grain port and a national flour-milling hub. Local mills were among the first to benefit from hydroelectricity generated by the Niagara River. Buffalo hosted the 1901 Pan-American Exposition after the Spanish–American War, showcasing the nation's advances in art, architecture, and electricity. Its centerpiece was the Electric Tower, with over two million light bulbs, but some exhibits were jingoistic and racially charged. At the exposition, President William McKinley was assassinated by anarchist Leon Czolgosz. When McKinley died, Theodore Roosevelt was sworn in at the Wilcox Mansion in Buffalo.
Attorney John Milburn and local industrialists convinced the Lackawanna Iron and Steel Company to relocate from Scranton, Pennsylvania, to the town of West Seneca in 1904. Employment was competitive, with many Eastern Europeans and Scrantonians vying for jobs. From the late 19th century to the 1920s, mergers and acquisitions led to distant ownership of local companies; this had a negative effect on the city's economy. Examples include the acquisition of Lackawanna Steel by Bethlehem Steel and, later, the relocation of Curtiss-Wright in the 1940s. The Great Depression saw severe unemployment, especially among the working class. New Deal relief programs operated in full force, and the city became a stronghold of labor unions and the Democratic Party. During World War II, Buffalo regained its manufacturing strength as military contracts enabled the city to manufacture steel, chemicals, aircraft, trucks and ammunition. The 15th-most-populous US city in 1950, Buffalo relied almost entirely on manufacturing; eighty percent of area jobs were in the sector. The city also had over a dozen railway terminals, as railroads remained a significant industry. The St. Lawrence Seaway was proposed in the 19th century as a faster shipping route to Europe, and later as part of a bi-national hydroelectric project with Canada. Its combination with an expanded Welland Canal led to a grim outlook for Buffalo's economy. After its 1959 opening, the city's port and barge canal became largely irrelevant. Shipbuilding in Buffalo wound down in the 1960s due to reduced waterfront activity, ending an industry which had been part of the city's economy since 1812. Downsizing of the steel mills was attributed to the threat of higher wages and unionization efforts. Racial tensions culminated in riots in 1967. Suburbanization led to the selection of the town of Amherst for the new University at Buffalo campus by 1970. Unwilling to modernize its plant, Bethlehem Steel began cutting thousands of jobs in Lackawanna during the mid-1970s before closing it in 1983. The region lost at least 70,000 jobs between 1970 and 1984. Like much of the Rust Belt, Buffalo has focused on recovering from the effects of late-20th-century deindustrialization. Geography Buffalo is on the eastern end of Lake Erie opposite Fort Erie, Ontario. It is at the head of the Niagara River, which flows north over Niagara Falls into Lake Ontario. The Buffalo metropolitan area is on the Erie/Ontario Lake Plain of the Eastern Great Lakes Lowlands, a narrow plain extending east to Utica, New York. The city is generally flat, except for elevation changes in the University Heights and Fruit Belt neighborhoods. The Southtowns are hillier, leading to the Cattaraugus Hills in the Appalachian Upland. Several types of shale, limestone and lagerstätten are prevalent in Buffalo and its surrounding area, lining their stream beds. According to Fox Weather, Buffalo is one of the top five snowiest large cities in the country, receiving, on average, 95 inches of snow annually. Although the city has not experienced any recent or significant earthquakes, Buffalo is in the Southern Great Lakes Seismic Zone (part of the Great Lakes tectonic zone). Buffalo has four channels within its boundaries: the Niagara River, Buffalo River (and Creek), Scajaquada Creek, and the Black Rock Canal, adjacent to the Niagara River. The city's Bureau of Forestry maintains a database of over seventy thousand trees.
According to the United States Census Bureau, Buffalo has an area of 52.5 sq mi (136 km2); 40.38 sq mi (104.6 km2) is land, and the rest is water. The city's total area is 22.66 percent water. In 2010, its population density was 6,470.6 per square mile. Buffalo's architecture is diverse, with a collection of 19th- and 20th-century buildings. Downtown Buffalo landmarks include Louis Sullivan's Guaranty Building, an early skyscraper; the Ellicott Square Building, once one of the largest of its kind in the world; the Art Deco Buffalo City Hall; the McKinley Monument; and the Electric Tower. Beyond downtown, the Buffalo Central Terminal was built in the Broadway-Fillmore neighborhood in 1929; the Richardson Olmsted Complex, built in 1881, was an insane asylum until its closure in the 1970s. Urban renewal from the 1950s to the 1970s spawned the Brutalist-style Buffalo City Court Building and Seneca One Tower, the city's tallest building. In the city's Parkside neighborhood, the Darwin D. Martin House was designed by Frank Lloyd Wright in his Prairie School style. Since 2016, Washington DC real estate developer Douglas Jemal has been acquiring and redeveloping iconic properties throughout the city. According to Mark Goldman, the city has a "tradition of separate and independent settlements". The boundaries of Buffalo's neighborhoods have changed over time. The city is divided into five districts, each containing several neighborhoods, for a total of thirty-five neighborhoods. Main Street divides Buffalo's east and west sides, and the west side was fully developed earlier. This division is seen in architectural styles, street names, neighborhood and district boundaries, demographics, and socioeconomic conditions; Buffalo's West Side is generally more affluent than its East Side. Several neighborhoods in Buffalo have had increased investment since the 1990s, beginning with the Elmwood Village. The 2002 redevelopment of the Larkin Terminal Warehouse led to the creation of Larkinville, home to several mixed-use projects and anchored by corporate offices. Downtown Buffalo and its central business district (CBD) had a 10.6-percent increase in residents from 2010 to 2017, as over 1,061 housing units became available; the Seneca One Tower was redeveloped in 2020. Other revitalized areas include Chandler Street, in the Grant-Amherst neighborhood, and Hertel Avenue in Parkside. The Buffalo Common Council adopted its Green Code in 2017, replacing zoning regulations which were over sixty years old. Its emphasis on regulations promoting pedestrian safety and mixed land use received an award at the 2019 Congress for the New Urbanism conference. Buffalo has a humid continental climate (Köppen: Dfa), and temperatures have been warming with the rest of the US. Lake-effect snow is characteristic of Buffalo winters, with snow bands (producing intense snowfall in the city and surrounding area) depending on wind direction off Lake Erie. However, Buffalo is rarely the snowiest city in the state. The Blizzard of 1977 resulted from a combination of high winds and snow which accumulated on land and on the frozen Lake Erie. Although snow does not typically impair the city's operation, it can cause significant damage in autumn (as the October 2006 storm did). In November 2014 (called "Snowvember"), the region had a record-breaking storm which produced over 5½ ft (66 in; 170 cm) of snow. Buffalo's lowest recorded temperature was −20 °F (−29 °C), which occurred twice: on February 9, 1934, and February 2, 1961.
Although the city's summers are drier and sunnier than those of other cities in the northeastern United States, its vegetation receives enough precipitation to remain hydrated. Buffalo summers are characterized by abundant sunshine, with moderate humidity and temperatures; the city benefits from cool, southwestern Lake Erie summer breezes which temper warmer temperatures. Temperatures rise above 90 °F (32.2 °C) an average of three times a year. Buffalo is the only city with a population of over 100,000 in the contiguous United States that has not had an official recording of 100 °F (37.8 °C) or more to date, with a maximum temperature of 99 °F (37 °C) reached on August 27, 1948. Rainfall is moderate, typically falling at night, and cooler lake temperatures hinder storm development in July. August is usually rainier and muggier, as the warmer lake loses its temperature-controlling ability. Demographics Several hundred Seneca, Tuscarora and other Iroquois tribal peoples were the primary residents of the Buffalo area before 1800, concentrated along Buffalo Creek. After the Revolutionary War, settlers from New England and eastern New York began to move into the area. From the 1830s to the 1850s, they were joined by Irish and German immigrants from Europe, both peasants and working class, who settled in enclaves on the city's south and east sides. At the turn of the 20th century, Polish immigrants replaced Germans on the East Side, who moved to newer housing; Italian immigrant families settled throughout the city, primarily on the lower West Side. During the 1830s, Buffalo residents were generally intolerant of the small groups of Black Americans who began settling on the city's East Side.[g] In the 20th century, wartime and manufacturing jobs attracted Black Americans from the South during the First and Second Great Migrations. In the World War II and postwar years from 1940 to 1970, the city's Black population rose by 433 percent. They replaced most of the Polish community on the East Side, who were moving out to suburbs. However, the effects of redlining, steering, social inequality, blockbusting, white flight and other racial policies resulted in the city (and region) becoming one of the most segregated in the U.S. During the 1940s and 1950s, Puerto Rican migrants arrived en masse, also seeking industrial jobs, settling on the East Side and moving westward. In the 21st century, Buffalo is classified as a majority minority city, with a plurality of residents who are Black and Latino. Since the 1970s, Buffalo has experienced the effects of urban decay, population loss to the suburbs and Sun Belt states, and job losses from deindustrialization. The city's population peaked at 580,132 in 1950, when Buffalo was the 15th-largest city in the United States – down from the eighth-largest city in 1900, after its growth rate slowed during the 1920s. Buffalo finally saw a population gain of 6.5% in the 2020 census, reversing a decades-long trend of population decline. The city has 278,349 residents as of the 2020 census, making it the 76th-most populous city in the United States. Its metropolitan area had 1.1 million residents in 2020, the country's 49th-largest. Compared to other major US metropolitan areas, the number of foreign-born immigrants to Buffalo is low. New immigrants are primarily resettled refugees (especially from war- or disaster-affected nations) and refugees who had previously settled in other U.S. cities.
During the early 2000s, most immigrants came from Canada and Yemen; this shifted in the 2010s to Burmese (Karen) refugees and Bangladeshi immigrants. Between 2008 and 2016, Burmese, Somali, Bhutanese, and Iraqi Americans were the four largest ethnic immigrant groups in Erie County. A 2008 report noted that although the food deserts seen in larger cities were not present in Buffalo, the city's neighborhoods of color had access only to smaller grocery stores and lacked the supermarkets more typical of newer, white neighborhoods. A 2018 report noted that over fifty city blocks on Buffalo's East Side lacked adequate access to a supermarket. Health disparities exist compared to the rest of the state: Erie County's average 2019 lifespan was three years shorter (78.4 years); its 17-percent smoking and 30-percent obesity rates were slightly higher than the state average. According to the Partnership for the Public Good, educational achievement in the city is lower than in the surrounding area; city residents are almost twice as likely as adults in the metropolitan area to lack a high-school diploma. During the early 19th century, Presbyterian missionaries tried to convert the Seneca people on the Buffalo Creek Reservation to Christianity. Initially resistant, some tribal members set aside their traditions and practices to form their own sect. Later, European immigrants added other faiths. Christianity is the predominant religion in Buffalo and Western New York. Catholicism (primarily the Latin Church) has a significant presence in the region, with 161 parishes and over 570,000 adherents in the Diocese of Buffalo. Major Protestant denominations in the area include Lutheran, Baptist, and Methodist. Pentecostals are also significant, and approximately 20,000 persons are non-denominational adherents. A Jewish community began developing in the city with immigrants from the mid-1800s; about one thousand German and Lithuanian Jews settled in Buffalo before 1880. Buffalo's first synagogue, Temple Beth El, was established in 1847. The city's Temple Beth Zion is the region's largest synagogue. With changing demographics and an increased number of refugees from other areas on the city's East Side, Islam and Buddhism have expanded their presence. In this area, new residents have converted empty churches into mosques and Buddhist temples. Hinduism maintains a small, active presence in the area, including the town of Amherst. A 2016 American Bible Society survey reported that Buffalo is the fifth-least "Bible-minded" city in the United States; 13 percent of its residents associate with the Bible. Economy The Erie Canal was the impetus for Buffalo's economic growth as a transshipment hub for grain and other agricultural products headed east from the Midwest. Later, manufacturing of steel and automotive parts became central to the city's economy. When these industries downsized in the region, Buffalo's economy became service-based. Its primary sectors include health care, business services (banking, accounting, and insurance), retail, tourism and logistics, especially with Canada. Despite the loss of large-scale manufacturing, some manufacturing of metals, chemicals, machinery, food products, and electronics remains in the region. Advanced manufacturing has increased, with an emphasis on research and development (R&D) and automation. In 2019, the U.S. Bureau of Economic Analysis valued the gross domestic product (GDP) of the Buffalo–Niagara Falls MSA at $53 billion (~$64 billion in 2024).
The civic sector is a major source of employment in the Buffalo area, and includes public, non-profit, healthcare and educational institutions. New York State, with over 19,000 employees, is the region's largest employer. In the private sector, top employers include the Kaleida Health and Catholic Health hospital networks and M&T Bank, the sole Fortune 500 company headquartered in the city. Most have been the top employers in the region for several decades. Buffalo is home to the headquarters of Rich Products, Delaware North and New Era Cap Company; the aerospace manufacturer Moog Inc. and toy maker Fisher-Price are based in nearby East Aurora. National Fuel Gas and Life Storage are headquartered in Williamsville, New York. Buffalo weathered the Great Recession of 2007–09 well in comparison with other U.S. cities, exemplified by increased home prices during this time. The region's economy began to improve in the early 2010s, adding over 25,000 jobs from 2009 to 2017. With state aid, Tesla, Inc.'s Giga New York plant opened in South Buffalo in 2017. The effects of the COVID-19 pandemic in the United States, however, increased the local unemployment rate to 7.5 percent by December 2020. The local unemployment rate had been 4.2 percent in 2019, higher than the national average of 3.5 percent. Culture Buffalo is home to over 20 theater companies, with many centered in the downtown Theatre District. Shea's Performing Arts Center is the city's largest theater. Built in 1926, with interiors designed by Louis Comfort Tiffany, the theater presents Broadway musicals and concerts. Shakespeare in Delaware Park has been held outdoors every summer since 1976. Comedy can be found throughout the city and is anchored by Helium Comedy Club, which hosts both local talent and national touring acts. The Nickel City Opera (also known as NC Opera Buffalo and NCO) is an opera company based in Buffalo. It was founded in 2004 by Valerian Ruminski and operated between 2009 and 2024. NCO has collaborated with the Buffalo Philharmonic Orchestra, commissioned an opera, and staged operatic works. Matthias Manasi was music director of Nickel City Opera from 2017 to 2021; his predecessor, Michael Ching, was music director of NCO from 2012 to 2017. NCO's home base is Shea's Performing Arts Center in the theater district of downtown Buffalo. Shea's Performing Arts Center was designed by the well-known Chicago firm Rapp and Rapp. The opera house was modeled in the style of European opera houses and decorated in a combination of French and Spanish Baroque and Rococo styles. The interior was designed by the designer and artist Louis Comfort Tiffany, and many of its elements remain today. Originally there were nearly 4,000 seats, but in the 1930s the number was reduced to the current 3,019, in part to make room for a larger orchestra by enlarging the orchestra pit. The Buffalo Philharmonic Orchestra was formed in 1935 and performs at Kleinhans Music Hall, whose acoustics have been praised. Although the orchestra nearly disbanded during the late 1990s due to a lack of funding, philanthropic contributions and state aid stabilized it. Under the direction of JoAnn Falletta, the orchestra has received a number of Grammy Award nominations and won the Grammy Award for Best Contemporary Classical Composition in 2009. KeyBank Center draws national music acts year-round.
Sahlen Field hosts the annual WYRK Taste of Country music festival every summer with national country music acts. Canalside regularly hosts outdoor summer concerts, a tradition that spun off from the defunct Thursday at the Square concert series. The Colored Musicians Club, an extension of what was once a separate musicians'-union chapter, maintains the city's jazz history. Rick James was born and raised in Buffalo and later lived on a ranch in the nearby Town of Aurora. James formed his Stone City Band in Buffalo, and had national appeal with several crossover singles in the R&B, disco and funk genres in the late 1970s and early 1980s. Around the same time, the jazz fusion band Spyro Gyra and jazz saxophonist Grover Washington Jr. also got their start in the city. The Goo Goo Dolls, an alternative rock group which formed in 1986, had 19 top-ten singles. Singer-songwriter and activist Ani DiFranco has released over 20 folk and indie rock albums on Righteous Babe Records, her Buffalo-based label. The death metal band Cannibal Corpse is the most commercially successful in the genre; they later settled in Tampa, Florida. Underground hip-hop acts in the city partner with Buffalo-based Griselda Records, whose artists include Westside Gunn, Conway the Machine, and Benny the Butcher, who all occasionally refer to Buffalo culture in their lyrics. The city's cuisine encompasses a variety of cultures and ethnicities. In 2015, the National Geographic Society ranked Buffalo third on its "World's Top Ten Food Cities" list. Teressa Bellissimo first prepared Buffalo wings (seasoned chicken wings) at the Anchor Bar in 1964. The Anchor Bar has a crosstown rivalry with Duff's Famous Wings, but Buffalo wings are served at many bars and restaurants throughout the city (some with unique cooking styles and flavor profiles). Buffalo wings are traditionally served with blue cheese dressing and celery. In 2003, the Anchor Bar received a James Beard Foundation Award in the America's Classics category. The Buffalo area has over 600 pizzerias, estimated to be more per capita than New York City. Several craft breweries began opening in the 1990s, and the city's last call is 4 am. Other mainstays of Buffalo cuisine include beef on weck, butter lambs, kielbasa, pierogi, sponge candy, chicken finger subs (including the stinger – a version that also includes steak), and the fish fry (popular any time of year, but especially during Lent). With an influx of refugees and other immigrants to Buffalo, its number of ethnic restaurants (including the West Side Bazaar kitchen incubator) has increased. Some restaurants use food trucks to serve customers, and nearly fifty food trucks appeared at Larkin Square in 2019. Buffalo was ranked the seventh-best city in the United States to visit in 2021 by Travel + Leisure, which noted the growth and potential of the city's cultural institutions. The Albright–Knox Art Gallery is a modern and contemporary art museum with a collection of more than 8,000 works, of which only two percent are on display. With a donation from Jeffrey Gundlach, a three-story addition designed by the Dutch architectural firm OMA opened in June 2023. Across the street, the Burchfield Penney Art Center contains paintings by Charles E. Burchfield and is operated by Buffalo State College. Buffalo is home to the Freedom Wall, a 2017 art installation commemorating civil-rights activists throughout history.
Near both museums is the Buffalo History Museum, featuring artwork, literature and exhibits related to the city's history and major events, and the Buffalo Museum of Science is on the city's East Side. Canalside, Buffalo's historic business district and harbor, attracts more than 1.5 million visitors annually. It includes the Explore & More Children's Museum, the Buffalo and Erie County Naval & Military Park, LECOM Harborcenter, and a number of shops and restaurants. A restored 1924 carousel (now solar-powered) and a replica boathouse were added to Canalside in 2021. Other city attractions include the Theodore Roosevelt Inaugural National Historic Site, the Michigan Street Baptist Church, Buffalo RiverWorks, Seneca Buffalo Creek Casino, Buffalo Transportation Pierce-Arrow Museum, and the Nash House Museum. The National Buffalo Wing Festival is held every Labor Day at Sahlen Field. Since 2002, it has served over 4.8 million Buffalo wings and has had a total attendance of 865,000. The Taste of Buffalo is a two-day food festival held in July at Niagara Square, attracting 450,000 visitors annually. Other events include the Allentown Art Festival, the Polish-American Dyngus Day, the Elmwood Avenue Festival of the Arts, Juneteenth in Martin Luther King Jr. Park, the World's Largest Disco in October and Friendship Festival in summer, which celebrates Canada-US relations. Sports Buffalo has three major professional sports teams: the Buffalo Sabres (National Hockey League), the Buffalo Bills (National Football League), and the Buffalo Bandits (National Lacrosse League). The Bills were a founding member of the American Football League in 1960, and have played at Highmark Stadium in Orchard Park since they moved from War Memorial Stadium in 1973. They are the only NFL team based in New York State.[i] Before the Super Bowl era, the Bills won the American Football League Championship in 1964 and 1965. With mixed success throughout their history, the Bills had a close loss in Super Bowl XXV and returned to consecutive Super Bowls after the 1991, 1992, and 1993 seasons (losing each time). The Sabres, an expansion team in 1970, share KeyBank Center with the Bandits. The Bandits are the most decorated of the city's professional teams, with seven championships. The Bills, Sabres and Bandits are owned by Pegula Sports and Entertainment. Buffalo's minor-league professional teams include the Buffalo Bisons (International League), who play at Sahlen Field, and Buffalo Pro Soccer (USL Championship). Semi-professional teams include the Buffalo eXtreme (American Basketball Association), FC Buffalo (USL League Two), FC Buffalo Women (USL W League), and Buffalo Stallions (National Premier Soccer League). Several colleges and universities in the area field intercollegiate sports teams; the Buffalo Bulls and the Canisius Golden Griffins compete in NCAA Division I. The Bulls have 16 varsity sports in the Mid-American Conference (MAC); the Golden Griffins field 15 teams in the Metro Atlantic Athletic Conference (MAAC), with the men's hockey team part of the Atlantic Hockey Association (AHA). The Bulls participate in the Football Bowl Subdivision, the highest level of college football. Parks and recreation Frederick Law Olmsted described Buffalo as being "the best planned city [...] in the United States, if not the world". With encouragement from city stakeholders, he and Calvert Vaux augmented the city's grid plan by drawing inspiration from Paris and introducing landscape architecture with aspects of the countryside. 
Their plan would introduce a system of interconnected parks, parkways and trails, unlike the singular Central Park in New York City. The largest would be Delaware Park, set across from Forest Lawn Cemetery to amplify the amount of open space. With construction of the system finishing in 1876, it is regarded as the country's oldest; however, some of Olmsted's plans were never fully realized. Some parks later diminished and succumbed to diseases, highway construction, and weather events such as Lake Storm Aphid in 2006. The non-profit Buffalo Olmsted Park Conservancy was created in 2004 to help preserve the 850 acres (340 ha) of parkland. Olmsted's work in Buffalo inspired similar efforts in cities such as San Francisco, Chicago, and Boston. The city's Division of Parks and Recreation manages over 180 parks and facilities, seven recreational centers, twenty-one pools and splash pads, and three ice rinks. The 350-acre (140 ha) Delaware Park features the Buffalo Zoo, Hoyt Lake, a golf course, and playing fields. Buffalo collaborated with its sister city Kanazawa to create the park's Japanese Garden in 1970, where cherry blossoms bloom in the spring. Opening in 1976, Tifft Nature Preserve in South Buffalo is on 264 acres (107 ha) of remediated industrial land. The preserve is an Important Bird Area and includes a meadow with trails for hiking and cross-country skiing, marshland, and fishing. The Olmsted-designed Cazenovia and South Parks, the latter home to the Buffalo and Erie County Botanical Gardens, are also in South Buffalo. According to the Trust for Public Land, Buffalo's 2022 ParkScore ranking had high marks for access to parks, with 89 percent of city residents living within a ten-minute walk from a park. The city ranked lower in acreage, however; nine percent of city land is devoted to parks, compared with the national median of about fifteen percent. Efforts to convert Buffalo's former industrial waterfront into recreational space have attracted national attention, with some writers comparing its appeal to that of Niagara Falls. Redevelopment of the waterfront began in the early 2000s, with the reconstruction of historically aligned canals on the site of the former Buffalo Memorial Auditorium. Placemaking initiatives, rather than permanent buildings and attractions, would drive the area's popularity. Under Mayor Byron Brown, Canalside was cited by the Brookings Institution as an example of waterfront revitalization for other U.S. cities to follow. Summer events have included paddle-boating and fitness classes, and the frozen canals permit ice skating, curling, and ice cycling in winter. Its success spurred the state to create Buffalo Harbor State Park in 2014; the park has trails, open recreation areas, bicycle paths and piers. The park's Gallagher Beach, the city's only public beach, has prohibited swimming due to high bacteria levels and other environmental concerns. The Shoreline Trail passes through Buffalo near the Outer Harbor, Centennial Park, and the Black Rock Canal. The North Buffalo–Tonawanda rail trail begins in Shoshone Park, near the LaSalle metro station in North Buffalo. Government Buffalo has a strong mayor–council government. As the chief executive of city government, the mayor oversees the heads of the city's departments, participates in ceremonies, boards and commissions, and serves as the liaison between the city and local cultural institutions.
Some agencies, including those for utilities, urban renewal and public housing, are state- and federally funded public benefit corporations that are semi-independent of city government. Christopher Scanlon has served as acting mayor since 2024, following the resignation of Byron Brown. No Republican has been mayor of Buffalo since Chester A. Kowal in 1965. With its nine districts, the Buffalo Common Council enacts laws, levies taxes, and approves mayoral appointees and the city budget. Bryan Bollman has been the Common Council president since 2024. Generally reflecting the city's electorate, all nine council members belong to the Democratic Party. Buffalo is the Erie County seat, and is within five of the county's eleven legislative districts. The city is part of the Eighth Judicial District. Court cases handled at the city level include misdemeanors, violations, housing matters, and claims under $15,000; more severe cases are handled at the county level. Buffalo is represented by members of the New York State Assembly and New York State Senate. At the federal level, the city takes up most of New York's 26th congressional district and has been represented by Democrat Tim Kennedy since 2024. Federal offices in the city include the Buffalo District of the United States Army Corps of Engineers' Great Lakes and Ohio River Division, the Federal Bureau of Investigation, and the United States District Court for the Western District of New York. In 2020, the city spent $519 million (~$618 million in 2024) on the effects of the COVID-19 pandemic. As of 2024, the city is hampered by a severe budget deficit attributed to the Byron Brown administration. Buffalo is served by the Buffalo Police Department. The police commissioner is Byron Lockwood, who was appointed by Mayor Byron Brown in 2018. Although some criminal activity in the city remains higher than the national average, total crimes have decreased since the 1990s; one reason may be the gun buyback program implemented by the Brown administration in the mid-2000s. Before this, the city was part of the nationwide crack epidemic of the 1980s and 1990s and its accompanying record-high crime levels. In 2018, city police began wearing body cameras, with 300 deployed across the force. A 2021 Partnership for the Public Good report noted that the BPD, which had a 2020–21 budget of about $145.7 million, had an above-average police-to-citizen ratio of 28.9 officers per 10,000 residents in 2020 – higher than peer cities Minneapolis and Toledo, Ohio. The force had a roster of 740 officers during the year, about two-thirds of whom handled emergency requests, road patrol and other non-office assignments. The department has been criticized for misconduct and brutality, including the 2004 wrongful termination of officer Cariol Horne for opposing police brutality toward a suspect and a 2020 protest-shoving incident. The Buffalo Fire Department and American Medical Response (AMR) handle fire-protection and emergency medical services (EMS) calls in the city. The fire department has about 710 firefighters and thirty-five stations, including twenty-three engine companies and twelve ladder companies. The department also operates the Edward M. Cotter, considered the world's oldest active fireboat. With vacant and abandoned homes prone to arson, squatting, prostitution and other criminal activities, the fire and police departments' resources were overburdened before the 2010s. Buffalo ranked second nationwide to St.
Louis for vacant homes per capita in 2007, and the city began a five-year program to demolish five thousand vacant, damaged and abandoned homes. On May 14, 2022, there was a mass shooting in a Tops supermarket on the East Side of Buffalo where 13 victims were shot in a racially motivated attack by a white supremacist who drove there from Conklin, over 200 miles away. Ten victims, all of whom were black, were murdered and three were injured. Media Buffalo's major daily newspaper is The Buffalo News. Established in 1880 as the Buffalo Evening News, the newspaper is estimated to have a daily circulation of 35,000 (down from a high of 310,000). The newspaper announced a pending sale of its building in February 2023, and the relocation of its printing operations to Cleveland, Ohio. Other newspapers in the Buffalo area include the Black-focused Buffalo Criterion and Challenger Community News, The Record of Buffalo State University, The Spectrum of the University at Buffalo, and Buffalo Business First. Investigative Post is an online watchdog news organization founded by former Buffalo News reporter and Pulitzer nominee Jim Heaney. Eighteen radio stations are licensed in Buffalo, including an FM station at Buffalo State College. Over ninety FM and AM radio signals can be received throughout the city. Eight full-power television outlets serve the city. Major commercial stations include WGRZ 2 (NBC), WIVB-TV 4 (CBS) and its sister station WNLO 23 (CW O&O), WKBW-TV 7 (ABC), and WUTV 29 (Fox, received in parts of Southern Ontario) and its sister station WNYO-TV 49 (MyNetworkTV). Buffalo's public television station is WNED-TV 17 (PBS); WNED has reported that most of the station's members live in the Greater Toronto Area. According to Nielsen Media Research, the Buffalo television market was the 51st largest in the United States as of 2020[update]. Movies shooting significant footage in Buffalo include Hide in Plain Sight (1980), Tuck Everlasting (1981), Best Friends (1982), The Natural (1984), Vamping (1984), Canadian Bacon (1995), Buffalo '66 (1998), Manna from Heaven (2002), Bruce Almighty (2003), The Savages (2007), Slime City Massacre (2010), Henry's Crime (2011), Sharknado 2: The Second One (2014), Killer Rack (2015), Teenage Mutant Ninja Turtles: Out of the Shadows (2016), Marshall (2016), The American Side (2017), The First Purge (2018), The True Adventures of Wolfboy (2019) and A Quiet Place Part II (2021). Although higher Buffalo production costs led to some films being finished elsewhere, tax credits and other economic incentives have enabled new film studios and production facilities to open. In 2021, several studio projects were in the planning stages. Education The Buffalo Public Schools have about thirty-four thousand students enrolled in their primary and secondary schools. The district administers about sixty public schools, including thirty-six primary schools, five middle high schools, fourteen high schools and three alternative schools, with a total of about 3,500 teachers. Its board of education, authorized by the state, has nine elected members who select the superintendent and oversee the budget, curriculum, personnel, and facilities. In 2020, the graduation rate was seventy-six percent. The public City Honors School was ranked the top high school in the city and 178th nationwide by U.S. News & World Report in 2021. There are twenty charter schools in Buffalo, with some oversight by the district. The city has over a dozen private schools, including Bishop Timon – St. 
Jude High School, Canisius High School, Mount Mercy Academy, and Nardin Academy (all Roman Catholic), as well as Darul Uloom Al-Madania and Universal School of Buffalo (both Islamic schools); nonsectarian options include Buffalo Seminary and the Nichols School. Founded by Millard Fillmore, the University at Buffalo (UB) is one of the State University of New York's two flagship universities and the state's largest public university. A Research I university, UB enrolls over 32,000 undergraduate, graduate and professional students in its thirteen schools and colleges. Two of UB's three campuses (the South and Downtown Campuses) are in the city, but most university functions take place at the large North Campus in Amherst. In 2020, U.S. News & World Report ranked UB the 34th-best public university and 88th among national universities. Buffalo State College, founded as a normal school, is one of SUNY's thirteen comprehensive colleges. The city's four-year private institutions include Canisius University, D'Youville University, Trocaire College, and Villa Maria College. SUNY Erie, the county's two-year public higher-education institution, and the for-profit Bryant & Stratton College have small downtown campuses. Established in 1835, Buffalo's main library is the Central Library of the Buffalo & Erie County Public Library system. Rebuilt in 1964, it contains an auditorium, the original manuscript of Adventures of Huckleberry Finn (donated by Mark Twain), and a collection of about two million books. Its Grosvenor Room maintains a special-collections listing of nearly five hundred thousand resources for researchers. A pocket park funded by Southwest Airlines opened in 2020, bringing landscaping improvements and seating to Lafayette Square. The system's free library cards are valid at the city's eight branch libraries and at member libraries throughout Erie County. Infrastructure Nine hospitals are operated in the city: Oishei Children's Hospital and Buffalo General Medical Center by Kaleida Health, Mercy Hospital and Sisters of Charity Hospital (Catholic Health), Roswell Park Comprehensive Cancer Center, the county-run Erie County Medical Center (ECMC), Buffalo VA Medical Center, BryLin (Psychiatric) Hospital and the state-operated Buffalo Psychiatric Center. John R. Oishei Children's Hospital, built in 2017, is adjacent to Buffalo General Medical Center on the 120-acre (49 ha) Buffalo Niagara Medical Campus north of downtown; its Gates Vascular Institute specializes in acute stroke recovery. The medical campus includes the University at Buffalo Jacobs School of Medicine and Biomedical Sciences, the Hauptman-Woodward Medical Research Institute and Roswell Park Comprehensive Cancer Center, ranked the 14th-best cancer-treatment center in the United States by U.S. News & World Report. Growth and changing transportation needs altered Buffalo's grid plan, which was developed by Joseph Ellicott in 1804. His plan laid out streets like the spokes of a wheel, naming them after Dutch landowners and Native American tribes. City streets expanded outward, denser in the west and spreading out east of Main Street. Buffalo is a port of entry with Canada; the Peace Bridge crosses the Niagara River and links the Niagara Thruway (I-190) and Queen Elizabeth Way. I-190, NY 5 and NY 33 are the primary expressways serving the city, carrying a total of over 245,000 vehicles daily.[j] NY 5 carries traffic to the Southtowns, and NY 33 carries traffic to the eastern suburbs and the Buffalo Airport.
The east-west Scajacquada Expressway (NY 198) bisects Delaware Park, connecting I-190 with the Kensington Expressway (NY 33) on the city's East Side to form a partial beltway around the city center. The Scajacquada and Kensington Expressways and the Buffalo Skyway (NY 5) have been targeted for redesign or removal. Other major highways include US 62 on the city's East Side; NY 354 and a portion of NY 130, both east–west routes; and NY 265, NY 266 and NY 384, all north–south routes on the city's West Side. Buffalo has a higher-than-average percentage of households without a car: 30 percent in 2015, decreasing to 28.2 percent in 2016; the 2016 national average was 8.7 percent. Buffalo averaged 1.03 cars per household in 2016, compared to the national average of 1.8. The Niagara Frontier Transportation Authority (NFTA) operates the region's public transit, including its airport, light-rail system, buses, and harbors. The NFTA operates 323 buses on 61 lines throughout Western New York. Buffalo Metro Rail is a 6.4 mi-long (10.3 km) line which runs from Canalside to the University Heights district. The line's downtown section, south of the Fountain Plaza station, runs at grade and is free of charge. The Buffalo area ranks twenty-third nationwide in transit ridership, with thirty trips per capita per year. Expansions have been proposed since Buffalo Metro Rail's inception in the 1980s, with the latest plan (in the late 2010s) reaching the town of Amherst. Buffalo Niagara International Airport in Cheektowaga has daily scheduled flights by domestic, charter and regional carriers. The airport handled nearly five million passengers in 2019. It received a J.D. Power award in 2018 for customer satisfaction at a mid-sized airport, and underwent a $50 million expansion in 2020–21. The airport, light rail, small-boat harbor and buses are monitored by the NFTA's transit police. Buffalo has an Amtrak intercity train station, Buffalo–Exchange Street station, which was rebuilt in 2020. The city's eastern suburbs are served by Amtrak's Buffalo–Depew station in Depew, which was built in 1979. Buffalo was a major stop on through routes between Chicago and New York City through the lower Ontario Peninsula; trains stopped at Buffalo Central Terminal, which operated from 1929 to 1979. Intercity buses depart and arrive from the NFTA's Metropolitan Transportation Center on Ellicott Street. Since Buffalo adopted a complete streets policy in 2008, efforts have been made to accommodate cyclists and pedestrians into new infrastructure projects. Improved corridors have bike lanes, and Niagara Street received separate bike lanes in 2020. Walk Score gave Buffalo a "somewhat walkable" rating of 68 out of 100, with Allentown and downtown considered more walkable than other areas of the city. Buffalo's water system is operated by Veolia Water, and water treatment begins at the Colonel Francis G. Ward Pumping Station. When it opened in 1915, the station's capacity was second only to Paris. Wastewater is treated by the Buffalo Sewer Authority, its coverage extending to the eastern suburbs. National Grid and New York State Electric & Gas (NYSEG) provide electricity, and National Fuel Gas provides natural gas. The city's primary telecommunications provider is Spectrum; Verizon Fios serves the North Park neighborhood. A 2018 report by Ookla noted that Buffalo was one of the bottom five U.S. cities in average download speeds at 66 megabits per second. 
The city's Department of Public Works manages Buffalo's snow and trash removal and street cleaning. Snow removal generally operates from November 15 to April 1. A snow emergency is declared by the National Weather Service after a snowstorm, and the city's roads, major sidewalks and bridges are cleared by over seventy snowplows within 24 hours. Rock salt is the principal agent for preventing snow accumulation and melting ice. Snow removal may coincide with driving bans and parking restrictions. The area along the Outer Harbor is the most dangerous driving area during a snowstorm; when weather conditions dictate, the Buffalo Skyway is closed by the city's police department. To prevent ice jams which may impact hydroelectric plants in Niagara Falls, the New York Power Authority and Ontario Power Generation began installing an ice boom annually in 1964. The boom's installation date is temperature-dependent, and it is removed on April 1 unless there is more than 650 km2 (250 sq mi) of ice remaining on eastern Lake Erie. It stretches 2,680 m (8,790 ft) from the outer breakwall at the Buffalo Outer Harbor to the Canadian shore near Fort Erie. Originally made of wood, the boom now consists of steel pontoons. Notable residents Sister cities Buffalo has eighteen sister cities: See also Explanatory notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nial] | [TOKENS: 714]
Contents Nial Nial (from "Nested Interactive Array Language") is a high-level array programming language developed from about 1981 by Mike Jenkins of Queen's University, Kingston, Ontario, Canada. Jenkins co-created the Jenkins–Traub algorithm. Nial combines a functional programming notation for arrays based on an array theory developed by Trenchard More with structured programming concepts for numeric, character, and symbolic data. It is most often used for prototyping and artificial intelligence. Q'Nial In 1982, Jenkins formed a company (Nial Systems Ltd) to market the language and the Q'Nial implementation of Nial. As of 2014, the company website supports an Open Source project for the Q'Nial software with the binary and source available for download. Its license is derived from Artistic License 1.0, the only differences being the preamble, the definition of "Copyright Holder" (which is changed from "whoever is named in the copyright or copyrights for the package" to "NIAL Systems Limited"), and an instance of "whoever" (which is changed to "whomever"). Nial concepts Version 4 of Nial used a generalized and expressive Array Theory, but Version 6 sacrificed some of the generality of the functional model and modified the Array Theory. Only Version 6 is available now. Nial defines all its data types as nested rectangular arrays. Ints, booleans, chars and so on are treated as solitary arrays, that is, arrays containing a single member. Arrays themselves can contain other arrays to form arbitrarily deep structures. Nial also provides Records, which are defined as non-homogeneous array structures. Functions in Nial are called Operations. From the Nial manual: "An operation is a functional object that is given an argument array and returns a result array. The process of executing an operation by giving it an argument value is called an operation call or an operation application." Application of operations Nial, like other APL-derived languages, unifies binary operators and operations, so the following notations have the same meaning (note: sum is the same as +). Binary operation: 2 + 3. Array notation: + [2,3]. Strand notation: + 2 3. Grouped notation: + (2 3). Nial also uses transformers, which are higher-order functions: they use an argument operation to construct a new, modified operation. Atlas An atlas in Nial is an operation made up of an array of component operations. When an atlas is applied to a value, each element of the atlas is applied in turn to the value to produce a result. This is used to provide a point-free (variable-free) style of definitions. It is also used by the transformers. In the expression 'inner [+,*]', for example, the list '[+,*]' is an atlas. Examples Arrays can also be written as literals. Shape gives the array dimensions, and reshape can be used to change them. Definitions are of the form '<name> is <expression>'. Contrast with APL Defining an is_prime filter: count generates the array [1..N], and pass is N (the identity operation). eachright applies is_divisible (pass, element) to each element of the count-generated array. This transforms the count-generated array into an array in which numbers that divide N are replaced by '1' and all others by '0'. Hence, if N is prime, the sum of the transformed array must be 2 (for 1 and N itself). All that remains is to generate another array using count N and filter out everything that is not prime. References
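To make the prime-filter description above concrete, the same logic is sketched below in Python rather than Nial; the helper names (is_divisible, is_prime, primes_up_to) are illustrative stand-ins for the steps described in the text, not Nial syntax or functions taken from the Q'Nial distribution.

    def is_divisible(n, d):
        # 1 if d divides n evenly, 0 otherwise (the 0/1 marks described above)
        return 1 if n % d == 0 else 0

    def is_prime(n):
        # count N gives [1..N]; mark every entry that divides N; a prime has
        # exactly two divisors (1 and N itself), so the marks must sum to 2
        marks = [is_divisible(n, d) for d in range(1, n + 1)]
        return sum(marks) == 2

    def primes_up_to(n):
        # generate [1..N] again and keep only the entries that pass is_prime
        return [k for k in range(1, n + 1) if is_prime(k)]

    print(primes_up_to(20))  # [2, 3, 5, 7, 11, 13, 17, 19]

The divisor-counting test mirrors the "sum of the transformed array must be 2" check; a production implementation would use a more efficient primality test, but this form tracks the Nial-style array reasoning step by step.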
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-:3-88] | [TOKENS: 11899]
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on the planet's geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
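As a rough cross-check of the figures quoted above (about 11% of Earth's mass, roughly half its diameter, and about 38% of its surface gravity), the relation g ∝ M/R² can be evaluated directly; the mass and radius ratios used in this small Python sketch are commonly cited approximate values rather than numbers taken from this article.

    # Commonly cited approximate ratios (assumed here, not from the text above)
    mass_ratio = 0.107    # Mars mass / Earth mass, roughly the ~11% quoted above
    radius_ratio = 0.532  # Mars radius / Earth radius, roughly "half the diameter"

    # Surface gravity scales as mass divided by radius squared
    gravity_ratio = mass_ratio / radius_ratio ** 2
    print(round(gravity_ratio, 3))  # ~0.378, i.e. roughly 38% of Earth's surface gravity

Using the rounder figures in the text (0.11 and 0.5) gives about 0.44, so the more precise ratios are needed to recover the familiar 38% value.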
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given grain size settles out of the thin Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process, such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Mars's higher concentration of atmospheric CO2 and lower surface pressure, compared to Earth's, may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons that alternate between its northern and southern hemispheres, much as Earth's do. Additionally, the orbit of Mars has a larger eccentricity than Earth's, and Mars approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with wind speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also drive the deposition of carbon dioxide frost (dry ice) on the polar ice caps.
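Several of the numbers quoted in this atmosphere discussion follow from elementary formulas. The Python sketch below reproduces the ~10.8 km scale height from H = kT/(mg), the 0.6% surface-pressure ratio, and the 43% sunlight figure from the inverse-square law; the ~210 K mean atmospheric temperature and the CO2 molecular mass are assumed typical values not stated in the text:

    # Isothermal scale height H = k*T / (m*g) for a CO2-dominated atmosphere.
    k = 1.380649e-23             # Boltzmann constant, J/K
    T = 210.0                    # K, assumed mean atmospheric temperature
    m_co2 = 44.01 * 1.66054e-27  # kg, mass of one CO2 molecule
    g = 3.71                     # m/s^2, Martian surface gravity
    print(f"scale height: {k * T / (m_co2 * g) / 1e3:.1f} km")  # ~10.7 km

    # Mean surface pressure (600 Pa) as a fraction of Earth's sea-level 101,325 Pa.
    print(f"pressure ratio: {600 / 101325:.1%}")    # ~0.6%

    # Sunlight falls off as the inverse square of distance from the Sun.
    print(f"relative sunlight: {1 / 1.52**2:.0%}")  # ~43%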
Hydrology While Mars holds substantial amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering have been observed, nor any superimposed impact craters, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
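Two of the quantitative claims in this hydrology section can be sanity-checked directly. In the Python sketch below, the first block spreads the south polar ice over the planet's surface and recovers the ~11 m figure quoted at the start of the section, and the second reproduces the "five to seven times" deuterium enrichment from the quoted D/H ratios; the ~1.6 million km³ volume of the south polar deposits is a commonly cited estimate assumed here, not stated in the text:

    import math

    # Equivalent global water layer: depth = ice volume / planetary surface area.
    r_mars = 3389.5e3                 # m, mean radius of Mars
    v_ice = 1.6e6 * 1e9               # m^3, assumed ~1.6 million km^3 of south polar ice
    area = 4 * math.pi * r_mars**2    # m^2, surface area of a sphere
    print(f"global layer: {v_ice / area:.0f} m")   # ~11 m

    # Deuterium enrichment from the D/H ratios quoted above.
    dh_mars, err = 9.3e-4, 1.7e-4     # Martian atmosphere, with uncertainty
    dh_earth = 1.56e-4                # terrestrial reference
    lo, hi = (dh_mars - err) / dh_earth, (dh_mars + err) / dh_earth
    print(f"enrichment: {lo:.1f}x to {hi:.1f}x")   # ~4.9x to ~7.1x, i.e. five to seven times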
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second-lowest of any planet from Earth, after that for Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Solar System, forming a straight line through the Sun. The average time between successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval.
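The synodic period quoted above follows from the sidereal orbital periods of the two planets, since 1/S = 1/P_Earth − 1/P_Mars. A minimal Python sketch (the precise sidereal-year values are standard figures, assumed here rather than taken from the text):

    # Synodic period from the two sidereal orbital periods.
    p_earth = 365.256    # days, Earth's sidereal year
    p_mars = 686.980     # days, Mars's sidereal year (~687 days, as quoted)

    s = 1 / (1 / p_earth - 1 / p_mars)
    print(f"synodic period: {s:.2f} days")                   # ~779.9 days
    print(f"launch-window spacing: {s / 30.44:.0f} months")  # ~26 months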
Moons Mars has two relatively small natural moons (compared to Earth's), Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet. The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that these tides may have been driven by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity; in later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"); commonly, though, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars measured by Tycho Brahe to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus ever observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. His maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to Mars was the Soviet Union's Mars 1, intended to fly by in 1963, but contact with it was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that lost contact en route (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in Mars orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to nine functioning spacecraft. Seven are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from Mars missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has little protection against bombardment by the solar wind due to the absence of a magnetosphere, and it has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surfaces of impact craters on Mars; it could likewise have preserved signs of Martian life, if life existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin for this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be infeasible. In 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in-situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (the Greek Ares), but was also associated with the demi-god Heracles (the Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with an arrow pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-86] | [TOKENS: 17273]
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1563) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The subsequent British attempt to disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War, and his later refusal to run for a third term as the country's first president, established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution; states in the North enacted laws to prohibit slavery within their boundaries, and an active abolitionist movement reemerged in the 1830s. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites.

The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created.

Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865).

Beginning with South Carolina in December 1860, 11 slave-state governments voted to secede from the United States, joining to create the Confederate States of America. All other state governments remained loyal to the Union. War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867.

The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. The Redeemers promptly began evicting the Carpetbaggers and regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation.

An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. The expansion continued into the early 20th century, by which time the U.S. economy outpaced those of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely through their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms.

Pro-American elements in Hawaii overthrew the Hawaiian monarchy in 1893; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and the invention of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations.

Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Germany and Italy until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence.

The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969.

Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, from which the U.S. completely withdrew in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed.

The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait.

The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.

Geography

The United States is the world's third-largest country by total area behind Russia and Canada. The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km²). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland.

Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape.

The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones, spanning approximately 4.5 million square miles (11.7 million km²) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical.

The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions most attractive to new residents are also among the most vulnerable to such extremes.

The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species.

There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the western states. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats; the United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.

Government and politics

The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy. Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States.
The U.S. Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas.

In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C.; the federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations, which hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed.

The Constitution is silent on political parties; however, parties developed independently in the 18th century with the Federalists and Anti-Federalists. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party; the former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative.

The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies, and many have consulates (official representatives), in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War; in March 2025 his administration paused all military aid to Ukraine and ended U.S. intelligence sharing with the country, though both were later restored.

The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve.

The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons, the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era.

State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000.

There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. State police departments have authority in their respective states, while federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing federal laws and the rulings of U.S. federal courts, and combating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law.

There is no unified "criminal justice system" in the United States. The American prison system is highly heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined.

In January 2023, the United States had the sixth-highest per capita incarceration rate in the world (531 people per 100,000 inhabitants) and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher".

Economy

The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP) of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for PPP, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country.
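To illustrate what those compounded rates imply (a back-of-the-envelope calculation derived here, not a figure from the cited sources), 25 years of real growth at 3.3% per year multiplies output by roughly

$$(1 + 0.033)^{25} \approx 2.25, \qquad \text{versus} \qquad (1 + 0.023)^{25} \approx 1.77$$

at the rest-of-G7 average rate; that is, real U.S. output slightly more than doubled over 1983–2008, against a roughly 77% cumulative gain for the other G7 economies.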
The U.S. dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, and its linked eurodollar and large U.S. Treasuries markets. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several partners, including Canada and Mexico through the USMCA.

Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter, and it is by far the world's largest exporter of services.

Americans have the highest average household and employee income among OECD member states, and in 2023 had the fourth-highest median household income, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires.

Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries.

There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and a lack of government support for at-risk workers.

The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth in R&D spending as a share of GDP. In 2022, the United States published the second-highest number of scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine.

The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight.

In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050.

The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. was among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast.

The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world by total number of passengers carried are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001.

The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight, in contrast to the more passenger-centered rail networks of Europe. Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles.

Demographics

The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020, making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. population had a net gain of one person every 16 seconds, or about 5,400 people per day.
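Assuming the net gain is uniform across the day, the daily figure follows directly from the per-second rate (an illustrative derivation, not the Bureau's own presentation):

$$\frac{86{,}400\ \text{seconds per day}}{16\ \text{seconds per person}} = 5{,}400\ \text{people per day}$$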
In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, the country had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas.

The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the population. African Americans constitute the country's third-largest ancestry group, at 12.1%. Asian Americans are the country's fourth-largest group, composing 5.9%. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years.

While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English has long been the country's de facto national language, and in 2025 Executive Order 14224 declared it the official language; however, the U.S. has never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages), South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English.

According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020.

America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023.

The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region, and "ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho.

About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West.

According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (up 0.7 years from 2023), while life expectancy for women was 81.4 years (up 0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight.

The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Obama signed into law the Patient Protection and Affordable Care Act. Abortion in the United States is not federally protected, and is illegal or restricted in 17 states.

American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (413 awards).

U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government, which include the U.S. service academies, the Naval Postgraduate School, and military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022.

Culture and society

The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity, the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture.

Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured, as well as the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality, and LGBTQ rights in the United States are among the most advanced by global standards.

The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants; whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well.

The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies, among them the National Endowment for the Arts and the National Endowment for the Humanities.

Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period.

While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature.

Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox); all four major broadcast television networks are commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In 2020, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967.

U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles.

The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT, all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China).
In 2015, the U.S. video game industry consisted of 2,457 companies supporting around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025.

The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are presented as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award.

Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles arrived early enough to have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but rural artisans there tended to continue their traditional forms longer than their urban counterparts did, and far longer than those in Western Europe.

The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. Manhattan's Garment District has been closely identified with American fashion since the industry's emergence there in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Some labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and best-attended films in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, and potatoes, together with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are among the most-watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
In other international competition, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185 spectators, setting the world record at the time for the largest crowd at a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Depopulated_Palestinian_locations_in_Israel] | [TOKENS: 407]
Contents List of towns and villages depopulated during the 1947–1949 Palestine war During the 1947–1949 Palestine war, or the Nakba, around 400 Palestinian Arab towns and villages were forcibly depopulated by Israeli forces, with a majority being destroyed and left uninhabitable. Today these locations are all in Israel; many were repopulated by Jewish immigrants, and their Arabic place names were replaced with Hebrew ones. Arabs remained in small numbers in some of the cities (Haifa, Jaffa and Acre), and Jerusalem was divided between Jordan and Israel. Around 30,000 Palestinians remained in Jerusalem in what became the Arab part of it (East Jerusalem). In addition, some 30,000 non-Jewish refugees relocated to East Jerusalem, while 5,000 Jewish refugees moved from the Old City to West Jerusalem on the Israeli side. The overwhelming majority of the Arab residents who had lived in the cities that became a part of Israel and were renamed (Acre, Haifa, Safad, Tiberias, Ashkelon, Beersheba, Jaffa and Beisan) fled or were expelled. Most of the Palestinians who remain there are internally displaced people from the villages nearby. A number of the towns and villages were destroyed by Israeli forces in the aftermath of the 1948 war, but it was not until 1965 that more than 100 remaining locations – including many of the largest depopulated places – were demolished by the Israel Land Administration. There are more than 120 "village memorial books" documenting the history of the depopulated Palestinian villages. These books are based on accounts given by villagers. Rochelle A. Davis has described the authors as seeking "to pass on information about their villages and their values to coming generations". The towns and villages in the list were arranged according to the subdistricts of Mandatory Palestine in which they were situated.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Life_(video_games)] | [TOKENS: 1260]
Contents Life (video games) In video games, a life is a play-turn that a player character has, defined as the period between the start and end of play. Lives refer to a finite number of tries before the game ends with a game over. Sometimes the euphemisms chance, try, rest and continue are used, particularly in all-ages games, to avoid the morbid insinuation of losing one's "life". Generally, if the player loses all their health, they lose a life. Losing all lives usually results in a game over, forcing the player to either restart or stop playing. The number of lives a player is granted varies per game type. A finite number of lives became a common feature in arcade games and action games during the 1980s, and mechanics such as checkpoints and power-ups made the managing of lives a more strategic experience for players over time. Lives give novice players more chances to learn the mechanics of a video game, while allowing more advanced players to take more risks. History Lives may have originated from the pinball mechanic of having a limited number of balls. A finite number of lives (usually three) became a common feature in arcade games. The number of lives is usually displayed on the screen (in arcade games, the character currently being played is also counted as a "life"). Much like in pinball games, the player's goal was usually to score as many points as possible with their limited number of lives. Taito's classic arcade video game Space Invaders (1978) is usually credited with introducing multiple lives to video games. Lives were important in these games because the desire to avoid the finality of the player character's death compelled players to insert more quarters, maximizing profit. Later, refinements of health, defense and other attributes, as well as power-ups, made managing the player character's life a more strategic experience and made lost health less of the handicap it was in early arcade games. Lives and game over screens came to be seen as outmoded holdovers from arcade games, unnecessary when players had already paid for the game. They also discouraged players from playing fairly: in games such as Doom, players resorted to save scumming in order to preserve their lives rather than restart from an in-game checkpoint with their lives depleted, and receiving a game over can cause players to abandon a game permanently instead of making another attempt at the level. Therefore, most modern games have completely abandoned the concept of player lives, instead simply restarting the player from the nearest checkpoint when they die, allowing them to undo or rewind their progress until they are safe, as in Prince of Persia: The Sands of Time, or making saving the player from death contingent on successfully executing a QTE, as in Batman: Arkham Asylum. Usage It is common in action games for the player to have multiple lives and chances to earn more in-game. This way, a player can recover from making a disastrous mistake. Role-playing games and adventure games usually grant only one, but allow player-characters to reload a saved game. Lives set up a situation where dying is not necessarily the end of the game, allowing the player to take risks they might not take otherwise, or experiment with different strategies to find one that works. Multiple lives also allow novice players a chance to learn a game's mechanics before the game is over.
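The conventional mechanic described above lends itself to a compact illustration. The following is a minimal Python sketch, not taken from any actual game; the class, names and numbers (three starting lives, 100 health) are illustrative assumptions:

    # Minimal sketch of a conventional lives system, as described above.
    # All names and values here are illustrative assumptions.
    STARTING_LIVES = 3          # three lives is the classic arcade default

    class Player:
        def __init__(self, lives=STARTING_LIVES, max_health=100):
            self.lives = lives
            self.max_health = max_health
            self.health = max_health

        def take_damage(self, amount):
            """Losing all health costs one life; losing all lives is game over."""
            self.health -= amount
            if self.health <= 0:
                self.lives -= 1
                if self.lives > 0:
                    self.health = self.max_health   # respawn with full health
                    print(f"Life lost. {self.lives} remaining.")
                else:
                    print("GAME OVER")              # restart or stop playing

    player = Player()
    for hit in (60, 60, 120, 40, 80):   # a run of hits across several attempts
        player.take_damage(hit)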
Another reason to implement lives is that the ability to earn extra lives provides an additional reward incentive for the player. The way lives are shown varies from game to game. While pinball games often use an increasing counter ("Ball 1", "Ball 2", "Ball 3"), most other types of games with a lives system display the number of lives remaining. A particular variation is in whether the displayed number of lives includes the one currently being used, hence whether the game ends immediately when the life counter reaches zero or the player still has one chance left. Many older video games feature cheat codes that allow the player to gain extra lives without earning them throughout gameplay. One example is Contra, which added the option to input the Konami code to get 30 extra lives. In modern times, some free-to-play games, such as the Candy Crush Saga trilogy, capitalize on the multiple-life system to create opportunities for microtransactions. In such games, a life is lost when the player fails a level, but once all lives are lost, the player is prevented from continuing the game for a period of time, instead of receiving a game over that would entail total failure or require a new beginning, as lives regenerate automatically after a number of minutes or hours. Players can either wait for lives, attempt alternate activities to recover lives (such as asking friends online to donate lives), or purchase items that can fully replenish lives or grant unlimited lives for a limited time to continue playing immediately. This system works like the "energy" meter of other free-to-play games; unlike energy, however, lives are not depleted when a level is completed successfully. An extra life or a 1-up is a video game item that increments the player character's number of lives. Because there are no universal game rules, the form 1-ups take varies from game to game, but they are often rare and difficult items to acquire. The use of the term "1-up" to designate an extra life first appeared in Super Mario Bros., where a 1-Up could be obtained in several ways, including grabbing a green "1-Up Mushroom", collecting 100 coins, using a Koopa shell to kill eight or more consecutive enemies, and jumping on eight or more consecutive enemies without touching the ground. The term quickly caught on, seeing use in both home and arcade video games. A number of games included an exploitable design flaw called a "1-up loop", in which it is possible to consistently acquire two or more 1-ups between a certain checkpoint and the following checkpoint. The player can thus acquire two 1-ups, make the character die, and restart from the first checkpoint with a net gain of one life; this procedure can then be repeated for as many lives as the player desires.
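The timed regeneration scheme described above can likewise be sketched. This is a hypothetical Python outline, assuming an illustrative cap of five lives and a 30-minute regeneration interval; real games vary in both values and implementation:

    import time

    # Hypothetical sketch of timed life regeneration in a free-to-play game.
    # The cap and interval are illustrative assumptions, not any real game's values.
    MAX_LIVES = 5
    REGEN_SECONDS = 30 * 60          # one life returns every 30 minutes

    class LifeBank:
        def __init__(self):
            self.lives = MAX_LIVES
            self.next_regen = None   # when the next life will come back

        def refresh(self, now=None):
            """Credit any lives that have regenerated since the last check."""
            now = time.time() if now is None else now
            while (self.lives < MAX_LIVES and self.next_regen is not None
                   and now >= self.next_regen):
                self.lives += 1
                self.next_regen = (None if self.lives == MAX_LIVES
                                   else self.next_regen + REGEN_SECONDS)

        def try_play(self, now=None):
            """Spend a life on a level attempt; refuse (rather than game over) if none remain."""
            now = time.time() if now is None else now
            self.refresh(now)
            if self.lives == 0:
                return False         # player must wait, ask friends, or purchase refills
            if self.lives == MAX_LIVES:
                self.next_regen = now + REGEN_SECONDS   # timer starts once below the cap
            self.lives -= 1
            return True

The key design point is that running out of lives blocks play only temporarily, which is what makes waiting, social donation, and paid refills meaningful alternatives.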
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_astronomy] | [TOKENS: 101]
Contents List of unsolved problems in astronomy This article is a list of notable unsolved problems in astronomy. Problems may be theoretical or experimental. Theoretical problems result from the inability of current theories to explain observed phenomena or experimental results. Experimental problems result from the inability to test or investigate a proposed theory. Other problems involve unique events or occurrences with unclear causes that have not recurred. The problems are grouped into planetary astronomy; stellar astronomy and astrophysics; galactic astronomy and astrophysics; black holes; cosmology; and extraterrestrial life.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-pep-0498-101] | [TOKENS: 4314]
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches received final security updates, down to Python 3.9.24 and then 3.9.25, the last release in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable Python 3.14 executable is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Explicit is better than implicit", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python aims for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do...while loops, which van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow of the Python Software Foundation and a Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance.
For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby.
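The generator behaviour described above can be shown concretely. A brief illustrative example of data flowing both out of a generator and, via send(), back into it:

    # Generators yield data out lazily; since Python 2.5, send() passes data back in.
    def running_total():
        total = 0
        while True:
            received = yield total   # yields the total out, receives the next value in
            if received is not None:
                total += received

    gen = running_total()
    next(gen)            # advance to the first yield; produces 0
    print(gen.send(5))   # 5
    print(gen.send(7))   # 12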
This distinction leads to some duplicated functionality. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class (for example, SpamClass() or EggsClass()); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. The mypy project also supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics.
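The division, modulo, rounding and comparison rules described above can be verified directly in the interpreter; the following lines simply restate the identities from the text as executable assertions:

    # Floor division rounds toward negative infinity; the remainder takes the
    # sign of the divisor, so the identities in the text hold for negatives too.
    assert 4 % -3 == -2
    for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
        assert (a + b) // b == a // b + 1
        assert b * (a // b) + a % b == a

    # True division always yields a float; ** is exponentiation.
    assert 7 / 2 == 3.5 and 7 // 2 == 3
    assert 5 ** 3 == 125 and 9 ** 0.5 == 3.0

    # Python 3 rounds ties to the nearest even integer ("banker's rounding").
    assert round(1.5) == 2 and round(2.5) == 2

    # Chained comparisons read as in mathematics: a < b < c means (a < b) and (b < c).
    a, b, c = 1, 2, 3
    assert (a < b < c) == ((a < b) and (b < c))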
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Typical introductory code examples are a "Hello, World!" program and a program to calculate the factorial of a non-negative integer; both, along with a function definition that uses a default parameter value, are shown in the sketch following this section. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code, and there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions; e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python.
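A minimal sketch of the code examples referred to above, assuming the standard textbook forms of a "Hello, World!" program and a factorial function (the exact examples in the original article may differ), together with a function definition that uses a default parameter value:

    # "Hello, World!" in Python:
    print("Hello, World!")

    # A function is defined with def; parameters may carry default values.
    def greet(name, greeting="Hello"):
        print(greeting + ", " + name + "!")

    greet("World")                 # uses the default greeting
    greet("World", greeting="Hi")  # overrides it

    # Factorial of a non-negative integer, using the indentation-delimited
    # blocks described earlier:
    def factorial(n):
        if n < 0:
            raise ValueError("n must be a non-negative integer")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))  # 120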
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there was unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; many outdated platforms have been dropped, and far fewer operating systems are supported now than in the past. All alternative implementations have at least slightly different semantics. For example, an alternative implementation may provide unordered dictionaries, in contrast to other current Python implementations. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers and transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as various specialized compilers; some older projects existed as well, along with compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several strategies and tools for optimizing Python performance, despite the inherent slowness of an interpreted language. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: backward-incompatible versions, where code is expected to break; major or "feature" releases, which are largely compatible with previous versions but introduce new features; and bugfix releases, which introduce no new features but fix bugs. Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not yet ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Prime_Meridian] | [TOKENS: 2672]
Contents Prime meridian A prime meridian is a meridian (a line of longitude) in a geographic coordinate system at which longitude is defined to be 0°. On a spheroid, a prime meridian and its anti-meridian (the 180th meridian in a 360-degree system) form a great ellipse. This divides the body (e.g. Earth) into two hemispheres: the Eastern Hemisphere and the Western Hemisphere (for an east-west notational system). Unlike the equator, which also divides a spherical celestial body into two hemispheres, the prime meridian is astronomically arbitrary. For each of Earth's historic prime meridians, various conventions have been used or advocated in different regions throughout history, but none (unlike the prime meridian of Mars) have a basis in physical geography. Earth's current international standard prime meridian is the IERS Reference Meridian. It is derived from, but differs slightly from, the Greenwich Meridian, the previous standard. Longitudes for the Earth and Moon are measured from their prime meridian (at 0°) to 180° east and west. For all other Solar System bodies, longitude is measured from 0° (their prime meridian) to 360°. West longitudes are used if the rotation of the body is prograde (or 'direct', like Earth), meaning that its direction of rotation is the same as that of its orbit. East longitudes are used if the rotation is retrograde. History The notion of longitude was developed by the Greeks Eratosthenes (c. 276 – 195 BCE) in Alexandria and Hipparchus (c. 190 – 120 BCE) in Rhodes, and applied to a large number of cities by the geographer Strabo (64/63 BCE – c. 24 CE). Ptolemy (c. 90 – 168 CE) was the first geographer to use a consistent meridian for a world map, in his Geographia. Ptolemy used as his basis the "Fortunate Isles", a group of islands in the Atlantic, which are usually associated with the Canary Islands (13°W to 18°W), although his maps correspond more closely to the Cape Verde islands (22°W to 25°W). The main point was to be comfortably west of the western tip of Africa (17°30′W), since negative numbers were not yet in use. His prime meridian corresponds to 18°40′ west of Winchester (about 20°W) today. At that time the chief method of determining longitude was by using the reported times of lunar eclipses in different countries. One of the earliest known descriptions of standard time in India appeared in the 4th-century CE astronomical treatise Surya Siddhanta. Postulating a spherical Earth, the book described the thousands-of-years-old custom of the prime meridian, or zero longitude, as passing through Avanti, the ancient name for the historic city of Ujjain in Central India, and Rohitaka, the ancient name for Rohtak (28°54′N 76°38′E), a city 750 km (470 mi) north. Ptolemy's Geographia was first printed with maps at Bologna in 1477, and many early globes in the 16th century followed his lead, but there was still a hope that a "natural" basis for a prime meridian existed. In 1493, Christopher Columbus reported that the compass pointed due north somewhere in mid-Atlantic, and this fact was used in the important Treaty of Tordesillas of 1494, which settled the territorial dispute between Spain and Portugal over newly discovered lands. The Tordesillas line was eventually settled at 370 leagues (about 2,190 kilometres (1,360 miles; 1,180 nautical miles)) west of Cape Verde. This is shown in the copies of Spain's Padron Real made by Diogo Ribeiro in 1527 and 1529.
São Miguel Island (25°30′W) in the Azores was still used for the same reason as late as 1594 by Christopher Saxton, although by then it had been shown that the zero magnetic declination line did not follow a line of longitude. In 1541, Mercator produced his 41 cm terrestrial globe and drew his prime meridian precisely through Fuerteventura (14°1′W) in the Canaries. His later maps used the Azores, following the magnetic hypothesis, but by the time that Ortelius produced the first modern atlas in 1570, other islands such as Cape Verde were coming into use. In his atlas longitudes were counted from 0° to 360°, not 180°W to 180°E as is usual today. This practice was followed by navigators well into the 18th century. In 1634, Cardinal Richelieu used the westernmost island of the Canaries, El Hierro, 19°55′ west of Paris, as the choice of meridian. The geographer Delisle decided to round this off to 20°, so that it simply became the meridian of Paris disguised. In the early 18th century, the battle was on to improve the determination of longitude at sea, leading to the development of the marine chronometer by John Harrison. The development of accurate star charts, principally by the first British Astronomer Royal, John Flamsteed, between 1680 and 1719, and their dissemination by his successor Edmond Halley, enabled navigators to use the lunar method of determining longitude more accurately using the octant developed by Thomas Godfrey and John Hadley. In the 18th century most countries in Europe adopted their own prime meridians, usually through their capital: hence in France the Paris meridian was prime, in Prussia it was the Berlin meridian, in Denmark the Copenhagen meridian, and in the United Kingdom the Greenwich meridian. Between 1765 and 1811, Nevil Maskelyne published 49 issues of the Nautical Almanac based on the meridian of the Royal Observatory, Greenwich. "Maskelyne's tables not only made the lunar method practicable, they also made the Greenwich meridian the universal reference point. Even the French translations of the Nautical Almanac retained Maskelyne's calculations from Greenwich – in spite of the fact that every other table in the Connaissance des Temps considered the Paris meridian as the prime." In 1884, at the International Meridian Conference in Washington, D.C., 22 countries voted to adopt the Greenwich meridian as the prime meridian of the world. The French argued for a neutral line, mentioning the Azores and the Bering Strait, but eventually abstained and continued to use the Paris meridian until 1911. The current international standard prime meridian is the IERS Reference Meridian. The International Hydrographic Organization adopted an early version of the IRM in 1983 for all nautical charts. It was adopted for air navigation by the International Civil Aviation Organization on 3 March 1989. International prime meridian Since 1984, the international standard for the Earth's prime meridian has been the IERS Reference Meridian. Between 1884 and 1984, the meridian of Greenwich was the world standard. These meridians are very close to each other. In October 1884 the Greenwich Meridian was selected by delegates (forty-one delegates representing twenty-five nations) to the International Meridian Conference held in Washington, D.C., United States to be the common zero of longitude and standard of time reckoning throughout the world. The position of the historic prime meridian, based at the Royal Observatory, Greenwich, was established by Sir George Airy in 1851.
It was defined by the location of the Airy Transit Circle from the first observation Airy took with it. Prior to that, it was defined by a succession of earlier transit instruments, the first of which was acquired by the second Astronomer Royal, Edmond Halley, in 1721. It was set up in the extreme north-west corner of the Observatory between Flamsteed House and the Western Summer House. This spot, now subsumed into Flamsteed House, is roughly 43 metres (47 yards) to the west of the Airy Transit Circle, a distance equivalent to roughly 2 seconds of longitude. It was Airy's transit circle that was adopted in principle (with the French delegates, who pressed for adoption of the Paris meridian, abstaining) as the Prime Meridian of the world at the 1884 International Meridian Conference. All of these Greenwich meridians were located via an astronomic observation from the surface of the Earth, oriented via a plumb line along the direction of gravity at the surface. This astronomic Greenwich meridian was disseminated around the world, first via the lunar distance method, then by chronometers carried on ships, then via telegraph signals carried by submarine communications cables, then via radio time signals. One remote longitude ultimately based on the Greenwich meridian using these methods was that of the North American Datum 1927 or NAD27, an ellipsoid whose surface best matches mean sea level under the United States. Beginning in 1973 the International Time Bureau and later the International Earth Rotation and Reference Systems Service changed from reliance on optical instruments like the Airy Transit Circle to techniques such as lunar laser ranging, satellite laser ranging, and very-long-baseline interferometry. The new techniques resulted in the IERS Reference Meridian, the plane of which passes through the centre of mass of the Earth. This differs from the plane established by the Airy transit, which is affected by vertical deflection (the local vertical is affected by influences such as nearby mountains). The change from relying on the local vertical to using a meridian based on the centre of the Earth caused the modern prime meridian to be 5.3″ east of the astronomic Greenwich prime meridian through the Airy Transit Circle. At the latitude of Greenwich, this amounts to 102 metres (112 yards). This was officially accepted by the Bureau International de l'Heure (BIH) in 1984 via its BTS84 (BIH Terrestrial System), which later became WGS84 (World Geodetic System 1984) and the various International Terrestrial Reference Frames (ITRFs). Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres; that is, towards the Airy Transit Circle (or the Airy Transit Circle has moved toward the east, depending on one's point of view) since 1984 (or the 1960s). With the introduction of satellite technology, it became possible to create a more accurate and detailed global map. With these advances there also arose the necessity to define a reference meridian that, whilst being derived from the Airy Transit Circle, would also take into account the effects of plate movement and variations in the way that the Earth was spinning. As a result, the IERS Reference Meridian was established and is commonly used to denote the Earth's prime meridian (0° longitude) by the International Earth Rotation and Reference Systems Service, which defines and maintains the link between longitude and time.
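The 102-metre figure quoted above can be checked with a short computation: the east-west distance corresponding to a small longitude offset Δλ at latitude φ is approximately R·cos(φ)·Δλ, with Δλ in radians. A sketch, assuming a spherical Earth with the WGS84 equatorial radius and Greenwich at about 51.48°N:

    import math

    # Rough check of the 5.3-arcsecond offset quoted above; assumes a spherical
    # Earth with the WGS84 equatorial radius and Greenwich at about 51.48 N.
    R = 6378137.0                          # metres
    lat = math.radians(51.48)              # latitude of Greenwich
    offset = math.radians(5.3 / 3600.0)    # 5.3 arcseconds in radians

    distance = R * math.cos(lat) * offset  # east-west arc length at that latitude
    print(f"{distance:.0f} m")             # ~102 m, matching the figure in the text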
Based on observations of satellites and compact celestial radio sources (quasars) from various coordinated stations around the globe, Airy's transit circle drifts northeast about 2.5 centimetres (1 inch) per year relative to this Earth-centred 0° longitude. It is also the reference meridian of the Global Positioning System operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF). A current convention on the Earth uses the line of longitude 180° opposite the IRM as the basis for the International Date Line. On Earth, starting at the North Pole and heading south to the South Pole, the IERS Reference Meridian (as of 2016) passes through 8 countries, 4 seas, 3 oceans and 1 channel. Prime meridian on other celestial bodies As on the Earth, prime meridians must be arbitrarily defined. Often a landmark such as a crater is used; other times a prime meridian is defined by reference to another celestial object, or by magnetic fields. Prime meridians have been defined for many planetographic systems.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Light_music] | [TOKENS: 2646]
Contents Light music Light music is a less-serious form of Western classical music, which originated in the 18th and 19th centuries and continues today. Its heyday was in the mid‑20th century. The style consists of through-composed, usually shorter orchestral pieces and suites designed to appeal to a wider audience than more sophisticated forms such as the concerto, the symphony and the opera. Light music was especially popular during the formative years of radio broadcasting, with stations such as the BBC Light Programme (1945–1967) playing almost exclusively "light" compositions. Occasionally also known as mood music and concert music, light music is often grouped with the easy listening genre. Light music was popular in the United Kingdom, the United States and in continental Europe, and many compositions in the genre remain familiar through their use as themes in film, radio and television series. Origins Before Late Romantic orchestral trends of length and scope separated the trajectory of lighter orchestral works from the Western classical canon, classical composers such as Mozart and Haydn won as much fame for writing lighter pieces such as Eine kleine Nachtmusik as for their symphonies and operas. Later examples of early European light music include the operettas of composers such as Franz von Suppé or Sir Arthur Sullivan; the Continental salon and parlour music genres; and the waltzes and marches of Johann Strauss II and his family. The Straussian waltz became a common light music composition (note for example Charles Ancliffe's "Nights of Gladness" or Felix Godin's "Valse Septembre"). These influenced the foundation of a "lighter" tradition of classical music in the 19th and early 20th centuries. In the UK, the light-music genre has its origin in the seaside and theatrical orchestras that flourished in Britain during the 19th and early 20th century. These played a wide repertoire of music, from classical music to arrangements of popular songs and ballads of the time. From this tradition came many specially written shorter orchestral pieces designed to appeal to a wider audience. Composers such as Sir Edward Elgar wrote a number of popular works in this medium, such as the "Salut d'Amour", the Nursery Suite, and Chanson de Matin. The conductor Sir Thomas Beecham became famous for concluding his otherwise serious orchestral concerts with what he termed "lollipops", meaning less serious, short or amusing works chosen as a crowd-pleasing encore. Influenced by the earlier "promenade concerts" held in London pleasure gardens, a similar spirit imbued many of Henry Wood's early Queen's Hall Proms concerts, especially the "Last Night of the Proms". With the introduction of radio broadcasting by the BBC in the 1920s the style found an ideal outlet. This increased after the launch of the BBC Light Programme in 1945, featuring programmes such as Friday Night is Music Night and Music While You Work. In the United States, "pops orchestras" such as the famous Boston Pops Orchestra began to emerge in the 19th century. The Boston Pops was founded in 1885 as a second, popular identity of the Boston Symphony Orchestra (BSO), founded four years earlier. They commissioned composers such as Leroy Anderson, Ferde Grofé, and George Gershwin to write original light works, alongside performances of theatre music, film music and arrangements of popular music and show tunes. Style The British light music composer Ernest Tomlinson stated that the main distinction of light music is its emphasis on melody.
This is certainly a major feature of the genre, although the creation of distinctive musical textures in scoring is another aim, for example the close harmony of Robert Farnon or Ronald Binge's "cascading strings" effect, which later became associated with the "sustained hum of Mantovani's reverberated violins". Lyndon Jenkins describes the genre as "original orchestral pieces, often descriptive but in many cases simply three or four minutes of music with an arresting main theme and a contrasting middle section." David Ades suggests that "it is generally agreed that it occupies a position between classical and popular music, yet its boundaries are often blurred". He goes on to cite broadcaster Denis Norden, who said that light music was "not just tuneful round the outside, but tuneful right through." Often, the pieces represent a mood, place, person or object, for example Farnon's "Portrait of a Flirt", Albert Ketèlbey's In a Monastery Garden or Edward White's "Runaway Rocking Horse". The genre's other popular title, "mood music", is a reference to pieces such as Charles Williams' A Quiet Stroll, which is written at an andante pace and has a jaunty, cheery feel. Light music pieces are usually presented individually or as movements within a suite, and are often given individual descriptive titles. These titles can sometimes be unusual or idiosyncratic, such as Frederic Curzon's "Dance of the Ostracised Imp". In keeping with this tradition of levity, pieces can also feature musical jokes at the expense of more "serious" works, such as Eric Fenby's overture Rossini on Ilkla Moor or Arthur Wilkinson's Beatlecracker Suite, which arranges songs by The Beatles in the style of Tchaikovsky's ballet The Nutcracker. The genre is often associated with the easy listening orchestral arrangements of Mantovani, Percy Faith and Henry Mancini, although, with the exception of Mancini, these composers are better known for their arrangements than for through-composed original compositions. As a result of this association, the music is sometimes linked to the genres of lounge music or Exotica, but light music generally does not feature vocals, synthesisers or popular music instruments. It can also sometimes be grouped with the background music, beautiful music and elevator music created for commercial background music players such as the Seeburg 1000 (Seeburg Corporation) and the Cantata 700 (3M), as well as with the output of the Muzak Orchestra: in the 1950s, 1960s and 1970s, such background music consisted largely of light orchestral arrangements of popular music played in shops, hotels and on airlines. In Japan, "light music" is used to translate the Japanese term 軽音楽 (Hepburn: keiongaku), also abbreviated "K-On" or "LM" in Roman characters; the Japanese term is a calque of the English "light music" and has been a broad label for non-classical popular music (e.g. jazz, rock) since NHK began using it in 1938. As film, radio and television themes In the 1950s and 1960s many light composers wrote production library music for use in film, radio and television, and as a result many light music compositions are familiar as theme music, examples being Trevor Duncan's March from a Little Suite, used by the BBC as the theme to Dr. Finlay's Casebook in the 1960s, and Edward White's "Puffin' Billy", the theme to both the BBC radio series Children's Favourites and the CBS children's programme Captain Kangaroo. Eric Coates's marches in particular were popular choices as theme music. 
The "Dambusters March", possibly his most famous work, was used as the title theme to the 1954 film The Dam Busters and has become synonymous with the film and the mission itself. Other Coates works used as theme music include "Calling All Workers" for Music While You Work, "Knightsbridge" for In Town Tonight and "Halcyon Days" as the theme to The Forsyte Saga. Coates was also commissioned to write original marches for television stations, including the "BBC Television March", ATV's "Sound and Vision March" and Associated Rediffusion's "Music Everywhere". Other noteworthy television start-up themes include William Walton's Granada Preludes, Call Signs and End Music for Granada Television, Robert Farnon's Derby Day for Radiotelevisão Portuguesa, Richard Addinsell's Southern Rhapsody for Southern Television, Ron Goodwin's Westward Ho! for Westward Television and John Dankworth's Widespread World for Rediffusion London. Music for television test card transmissions was also a significant outlet for light music in the UK, from the mid-1950s into the 1980s. Several pieces of light music are used as themes on BBC Radio 4 to the present day, with Eric Coates's "By the Sleepy Lagoon" being the theme of Desert Island Discs, Arthur Wood's "Barwick Green" the theme of The Archers and Ronald Binge's "Sailing By" preceding the late-night shipping forecast. Decline and resurgence During the 1960s, the style began to fall out of fashion on radio and television, forcing many light composers to refocus their energy on writing more serious works or music for film. Robert Farnon completed several symphonies in the later part of his life, as well as composing for television, for example Colditz. The light composers' skills of classical orchestration and arrangement were appreciated by composers such as John Williams, with both Angela Morley and Gordon Langford asked to help orchestrate his film scores for Star Wars (1977) and E.T. the Extra-Terrestrial (1982), amongst others. Many orchestras specialising in playing light music were disbanded. Small palm court orchestras, once common in hotels, seaside resorts and theatres, were gradually displaced by recorded music. The BBC began to discard its archive of light music, much of which was saved by the composer Ernest Tomlinson and is now kept at his Library of Light Orchestral Music. However, the genre was kept in the public consciousness by its use in advertisements and television programmes, often as a nostalgic evocation of the 1940s and 1950s. During the 1990s, the genre began to be rediscovered, and original remastered recordings by orchestras such as the Queen's Hall Light Orchestra were issued on compact disc for the first time. This was followed by new recordings of light music by orchestras such as the Royal Ballet Sinfonia, the New London Orchestra and the BBC Concert Orchestra, as well as continued public concerts by orchestras such as the Cambridge Concert Orchestra, the Scarborough Spa Orchestra and Vancouver Island's Palm Court Light Orchestra. The style also found a new home on BBC Radio 3 on Brian Kay's Light Programme, although this programme was discontinued in February 2007. In 2007, BBC Four broadcast an evening of light music as part of a themed evening celebrating British culture between 1945 and 1955, which included Brian Kay's documentary Music for Everybody and a televised version of Friday Night is Music Night. 
In the UK, US and Canada, light music can still be heard on some of the radio channels that specialise in classical music, for example Classic FM in the UK and XLNC1, a Mexico-based station serving the San Diego area. A nationwide participatory festival of light music called "Light Fantastic" was organised by BBC Radio 3 in June 2011 as part of the 60th anniversary celebrations of the 1951 Festival of Britain. This included events in London, Manchester, Cardiff and Glasgow, from both professional and amateur ensembles, including a live revival of Music While You Work from a factory in Irlam near Manchester, several light music concerts from the Southbank Centre and a number of documentaries about the genre. Light music is also frequently used as incidental music in radio and television programmes: for example, Charles Williams' "Devil's Galop" (once famous as the theme to Dick Barton: Special Agent) is now often used in spoofs of 1950s action programmes, such as Mitchell and Webb's The Surprising Adventures of Sir Digby Chicken-Caesar sketches. Mitchell and Webb also use Acker Bilk's "Stranger on the Shore" as the theme music of their radio sketch show. Notable composers There are hundreds of composers who can be considered to have written "light music", although those who focused primarily on lighter works include Charles Ancliffe, Ronald Binge, Eric Coates, Frederic Curzon, Trevor Duncan, Robert Farnon, Adalgiso Ferraris, Ron Goodwin, Heinz Kiessling, Albert Ketèlbey, Billy Mayerl, Angela Morley, King Palmer, Ernest Tomlinson, Sidney Torch, Edward White, Charles Williams, Alberto Semprini and Haydn Wood. Each of these composers worked during the "golden age" of light music, from roughly 1920 to 1960.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-149] | [TOKENS: 8773]
OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware and has since developed a complex corporate structure. As of October 2025, following a restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left the company, citing what they described as its prominent role in an industry-wide deprioritization of safety work. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected lagged significantly behind these pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers; Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, reducing training time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company; many top researchers worked for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk later revived legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the bid complicated Altman's restructuring plan by setting a public benchmark for the nonprofit's valuation. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, and would use that equity to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring was illegal and would strip governance safeguards from the nonprofit and remove oversight authority from the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of how much equity it could receive in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman said was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, a portion of which would be provided as access to Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone; in comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, with the partners planning to fund it over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military AI applications, alongside Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's determination to maintain its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion and made OpenAI the most valuable privately owned company in the world, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the company's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board had initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and had proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting observer; Microsoft relinquished the observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years; Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. Also in October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. The text snippets to be annotated, however, often contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021, and the four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. Also in September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, this had not been realized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD, committing to purchase six gigawatts' worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform; more than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership; under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, gaining attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and generate coherent improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry". Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the Humanity's Last Exam (HLE) benchmark. Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning. 
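Several of the products in the timeline above are delivered through the hosted API first introduced alongside GPT-3. As a minimal sketch, not drawn from this article, of what such a call looks like, the following assumes the official openai Python package (v1-style client) and an OPENAI_API_KEY set in the environment; the model name and prompt are purely illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a chat-style request; the messages list carries the conversation so far.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model name the API accepts
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a public benefit corporation is."},
    ],
)

print(response.choices[0].message.content)

The same request-response shape underlies products built on the API, from the early GPT-3 commercial offering to the customized GPTs launched in 2023.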
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model that it says is better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this departure from its earlier openness. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his seat on the board of directors, citing "a potential future conflict [of interest]" with his role as CEO of Tesla, due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, co-leader of the superalignment team, also departed, citing concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that weaker AI systems, by contrast, should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914; such demands are typically preliminary, nonpublic investigative matters, but the FTC's document was leaked. The investigation covered allegations that the company had scraped public data and published false and defamatory information, and the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements (for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent) and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced its security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal law. According to Scott Kohler, OpenAI has opposed California's AI legislation and argued that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI had never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement in mid-2023 in suits brought on behalf of authors including Sarah Silverman, Paul Tremblay and Mona Awad, with the attorney Matthew Butterick among the counsel. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4 (a minimal sketch of such a transcription step appears at the end of this section). The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and AlterNet Media filed lawsuits against OpenAI on copyright grounds; the litigation is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker. 
It was filed in San Francisco, California, by sixteen anonymous plaintiffs, who also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform, even as other publications, such as The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI claimed, moreover, that it could make available neither the recipients of ChatGPT's output nor the sources used. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare"; its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. 
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over Adams's death: her son Stein-Erik Soelberg, 56, had allegedly murdered her and, in the months prior, the paranoid and delusional Soelberg had often discussed his ideas with ChatGPT. The estate claimed that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded by saying it would make ChatGPT safer for users disconnected from reality.
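To close the loop on the Whisper transcription pipeline mentioned in the training-data disputes above, here is a minimal sketch of a single audio-to-text step using the open-source whisper package that OpenAI released. The checkpoint name and file path are illustrative, and the industrial-scale YouTube pipeline described in the reporting (downloading, batching, storage) is not shown.

import whisper  # open-source speech recognition package (pip install openai-whisper)

# Load a pretrained checkpoint; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe one audio file to text. The result dict also carries
# per-segment timestamps and the detected language.
result = model.transcribe("lecture.mp3")
print(result["text"])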
========================================
[SOURCE: https://en.wikipedia.org/wiki/Greenwich] | [TOKENS: 7993]
Greenwich Greenwich (/ˈɡrɛnɪtʃ/ GREN-itch, /-ɪdʒ/ -ij, /ˈɡrɪn-/ GRIN-) is an affluent area in south-east London, England, within the Royal Borough of Greenwich and the ceremonial county of Greater London, 5.5 miles (8.9 km) east-south-east of Charing Cross. It contains the UNESCO World Heritage Site of Maritime Greenwich. Greenwich is notable for its astronomical and maritime history and for giving its name to the Greenwich Meridian (0° longitude) and Greenwich Mean Time. The town became the site of a royal palace, the Palace of Placentia, from the 15th century, and was the birthplace of several Tudor royals, including Henry VIII and Elizabeth I. The palace fell into disrepair during the English Civil War and was demolished, eventually being replaced by the Royal Naval Hospital for Sailors, designed by Sir Christopher Wren and his assistant Nicholas Hawksmoor. These buildings became the Royal Naval College in 1873, and they remained a military education establishment until 1998, when they passed into the hands of the Greenwich Foundation. The historic rooms within these buildings remain open to the public; other buildings are used by the University of Greenwich and Trinity Laban Conservatoire of Music and Dance. The town became a popular resort in the 18th century, and many grand houses were built there, such as Vanbrugh Castle (1717) on Maze Hill and the Ranger's House (1722) near Blackheath. From the Georgian period, estates of houses were constructed above the town centre. The maritime connections of Greenwich were celebrated in the 20th century with the siting of the historic vessels Cutty Sark and Gipsy Moth IV next to the river front, and the opening of the National Maritime Museum in the former buildings of the Royal Hospital School in 1934. Historically an ancient parish in the Blackheath Hundred of Kent, the town formed part of the growing conurbation of Victorian London in the 19th century. When the County of London, an administrative area designed to replace the Metropolitan Board of Works, was formed in 1889, the parish merged with those of Charlton-next-Woolwich, Deptford St Nicholas and Kidbrooke to create the Metropolitan Borough of Greenwich. When local government in London was again reformed in 1965, it merged with most of the Metropolitan Borough of Woolwich, creating what is now the Royal Borough of Greenwich, a local authority district of Greater London. History The place-name 'Greenwich' is first attested in an Anglo-Saxon charter of 918, where it appears as Gronewic. It is recorded as Grenewic in 964, and as Grenawic in the Anglo-Saxon Chronicle for 1013. It is Grenviz in the Domesday Book of 1086, and Grenewych in the Taxatio Ecclesiastica of 1291. The name means 'green wic', that is, a green trading settlement or harbour: the element wic, ultimately from the Latin vicus, marks Greenwich as what is known as a -wich town, or emporium. The settlement later became known as East Greenwich to distinguish it from West Greenwich or Deptford Strond, the part of Deptford adjacent to the River Thames, but the use of East Greenwich to mean the whole of the town of Greenwich died out in the 19th century. However, Greenwich was divided into the registration subdistricts of Greenwich East and Greenwich West from the beginning of civil registration in 1837, the boundary running down what is now Greenwich Church Street and Croom's Hill, although more modern references to "East" and "West" Greenwich probably refer to the areas east and west of the Royal Naval College and National Maritime Museum, the latter area corresponding with the West Greenwich council ward. 
An article in The Times of 13 October 1967 stated: East Greenwich, gateway to the Blackwall Tunnel, remains solidly working class, the manpower for one eighth of London's heavy industry. West Greenwich is a hybrid: the spirit of Nelson, the Cutty Sark, the Maritime Museum, an industrial waterfront and a number of elegant houses, ripe for development. Royal charters granted to English colonists in North America, as well as in the East India Company's Bombay and in St Helena, often used the name of the manor of East Greenwich to describe the tenure (from the Latin verb teneo, 'hold') as that of free socage. New England charters provided that the grantees should hold their lands "as of his Majesty's manor of East Greenwich". This was in relation to the principle of land tenure under English law, that the ruling monarch (king or queen) was paramount lord of all the soil in the terra regis, while all others held their lands, directly or indirectly, under the monarch. Land outside the physical boundaries of England, as in America, was treated as belonging constructively to one of the existing royal manors; from Tudor times grants frequently used the name of the manor of East Greenwich, but some 17th-century grants named the castle of Windsor. Places in North America that have taken the name "East Greenwich" include a township in Gloucester County, New Jersey, a hamlet in Washington County, New York, and a town in Kent County, Rhode Island. Greenwich, Connecticut, was also named after Greenwich. Tumuli to the south-west of Flamsteed House, in Greenwich Park, are thought to be early Bronze Age barrows re-used by the Saxons in the 6th century as burial grounds. To the east, between the Vanbrugh and Maze Hill Gates, is the site of a Roman villa or temple, the spot marked by a small area of red paving tesserae protected by railings. It was excavated in 1902, and 300 coins were found dating from the emperors Claudius and Honorius to the 5th century. The site was investigated by the Channel 4 television programme Time Team in 1999, broadcast in 2000, and further investigations were made by the same group in 2003. The Roman road from London to Dover, Watling Street, crossed the high ground to the south of Greenwich, through Blackheath, following the line of an earlier Celtic route from Canterbury to St Albans. As late as the reign of Henry V, Greenwich was only a fishing town, with a safe anchorage in the river. During the reign of Æthelred the Unready, the Danish fleet anchored in the River Thames off Greenwich for over three years, with the army encamped on the hill above. From here they attacked Kent and, in the year 1012, took the city of Canterbury, making Archbishop Alphege their prisoner for seven months in their camp at Greenwich, at that time within the county of Kent. They stoned him to death for his refusal to allow his ransom (3,000 pieces of silver) to be paid, and kept his body until the reported blossoming of a stick that had been immersed in his blood; on account of this miracle his body was released to his followers. He achieved sainthood for his martyrdom and, in the 12th century, the parish church was dedicated to him. The present church on the site west of the town centre is St Alfege's Church, designed by Nicholas Hawksmoor in 1714 and completed in 1718. The Domesday Book of 1086 records the manor of Grenviz in the hundred of Grenviz as held by Bishop Odo of Bayeux, whose lands had been seized by the crown in 1082. The name of the hundred was changed to Blackheath when the site of the hundred court was moved there in the 12th century. 
There has been a royal palace or hunting lodge here since before 1300, when Edward I is known to have made offerings at the chapel of the Virgin Mary. Subsequent monarchs were regular visitors, with Henry IV making his will here, and Henry V granting the manor, for life, to Thomas Beaufort, Duke of Exeter, who died at Greenwich in 1426. The palace was created by Humphrey, Duke of Gloucester, Henry V's younger brother and regent to Henry VI, in 1447; he enclosed the park and erected a tower (Greenwich Castle) on the hill now occupied by the Royal Observatory. The Thames-side palace was renamed the Palace of Placentia or Pleasaunce by Henry VI's consort Margaret of Anjou after Humphrey's death. The palace was completed and further enlarged by Edward IV, and in 1466 it was granted to his queen, Elizabeth. Edward IV had previously been given permission by the Pope to establish a Franciscan friary of Observant Friars in Greenwich; this was done in 1485, two years after his death. The first Observant house in England, it was located on land adjacent to the palace. After Henry VIII rejected papal authority in 1534, the Franciscan Observants were suppressed; refounded as Franciscan Conventual, the friary was dissolved in 1538, then re-established in 1555 for Observants, before the friars were finally expelled in 1559 and the friary was demolished in 1662. Ultimately it was because the palace and its grounds were a royal possession (with a useful hill) that it was chosen as the site for Charles II's Royal Observatory, from which stemmed Greenwich's subsequent global role as originator of the modern Prime Meridian. The palace was the principal residence of Henry VII, whose sons Henry (later Henry VIII) and Edmund Tudor were born here and baptised in St Alfege's. Henry VIII favoured Greenwich over nearby Eltham Palace, the former principal royal palace in south London, which was not on the River Thames and so was less accessible. Henry extended Greenwich Palace and it became his principal London seat until Whitehall Palace was built in the 1530s. Henry VIII married Catherine of Aragon and Anne of Cleves at Greenwich, and both of his daughters, Mary (18 February 1516) and Elizabeth (7 September 1533), were born at Greenwich. His son Edward VI also died there, at age 15. The Palace of Placentia, in turn, became Elizabeth's favourite summer residence. Both she and her sister Mary used the palace extensively, and Elizabeth's Council planned the campaign against the Spanish Armada there in 1588. James I carried out the final remodelling work on Greenwich Palace, granting the manor to his wife Queen Anne of Denmark. In 1616 Anne commissioned Inigo Jones to design and build the surviving Queen's House as the final addition to the palace. Charles I granted the manor to his wife Queen Henrietta Maria, for whom Inigo Jones completed the Queen's House. During the English Civil War, the palace was used as a biscuit factory and prisoner-of-war camp. Then, in the Interregnum, the palace and park were seized to become a 'mansion' for the Lord Protector. By the time of the Restoration, the Palace of Placentia had fallen into disuse and was pulled down. Construction began on a grand new palace for Charles II, but only the King Charles block was completed. Charles II also redesigned and replanted Greenwich Park and founded and built the Royal Observatory.
Prince James (later King James II & VII), as Duke of York and Lord High Admiral until 1673, was often at Greenwich with his brother Charles and, according to Samuel Pepys, proposed the idea of creating a Royal Naval Hospital. This was eventually established at Greenwich by his daughter Mary II, who in 1692–1693 commissioned Christopher Wren to design the Royal Hospital for Seamen (now the Old Royal Naval College). The work was begun under her widower William III in 1696 and completed by Hawksmoor. Queen Anne and Prince George of Denmark continued to patronise the project. George I landed at Greenwich from Hanover on his accession in 1714. His successor George II granted the Royal Hospital for Seamen the forfeited estates of the Jacobite Earl of Derwentwater, which allowed the building to be completed by 1751. In 1805, George III granted the Queen's House to the Royal Naval Asylum (an orphanage school), which amalgamated in 1821–1825 with the Greenwich Hospital School. Extended with the buildings that now house the National Maritime Museum, it was renamed the Royal Hospital School by Queen Victoria in 1892. George IV donated nearly 40 paintings to the hospital in 1824, at a stroke creating a gallery in the Painted Hall. These now form the Greenwich Hospital Collection at the National Maritime Museum. Subsequently, William IV and Queen Adelaide were both regular donors and visitors to the gallery. Queen Victoria rarely visited Greenwich, but in 1845 her husband Prince Albert personally bought Nelson's Trafalgar coat for the Naval Gallery. In 1838 the London and Greenwich Railway (L&GR) completed the first steam railway in London. It started at London Bridge and had its terminus at London Street (now Greenwich High Road). It was also the first to be built specifically for passengers, and the first elevated railway, having 878 arches over its almost four-mile stretch. South of the railway's viaduct over Deptford Creek is a Victorian pumping station constructed in 1864 as part of Joseph Bazalgette's London sewerage system (the Southern Outfall Sewer flows under Greenwich town centre). In 1853 the local Scottish Presbyterian community built a church, St Mark's, nearby; it was extended twice in the 1860s during the ministry of Adolph Saphir, eventually accommodating 1,000 worshippers. In 1864, opposite the railway terminus, the theatrical entrepreneur Sefton Parry built the thousand-seat New Greenwich Theatre. William Morton was one of its more successful managers. The theatre was demolished in 1937 to make way for a new Town Hall, now a listed building under new ownership and renamed Meridian House. Greenwich Station is at the northern apex of the Ashburnham Triangle, a residential estate developed by the Ashburnham family, mainly between 1830 and 1870, on land previously developed as market gardens. It is now a designated conservation area. The present Greenwich Theatre, further to the east, on Croom's Hill, was constructed inside the shell of a Victorian music hall. Beginning life in 1855 as an annexe to the Rose and Crown, the music hall was rebuilt in 1871 by Charles Crowder and subsequently operated under many names. Further south on Croom's Hill, the Roman Catholic church of Our Ladye Star of the Sea was opened in 1851. The modern Greenwich meridian was established at the Royal Observatory in the same year. George V and his wife Queen Mary both supported the creation of the National Maritime Museum, and Mary presented the museum with many items.
Prince Albert, Duke of York (later George VI), laid the foundation stone of the new Royal Hospital School when it moved out to Holbrook, Suffolk. In 1937, three weeks before his coronation, his first public act as king was to open the National Maritime Museum in the buildings vacated by the school. The king was accompanied by his mother Queen Mary, his wife Queen Elizabeth and Princess Elizabeth. Princess Elizabeth and her consort Prince Philip, who had been ennobled as Duke of Edinburgh and Baron Greenwich on his marriage in 1947, made their first public and official visit to Greenwich in 1948 to receive the Freedom of the Borough for Philip. In the same year, he became a trustee of the National Maritime Museum. Prince Philip was a trustee for 52 years until 2000, when he became its first patron. From 1952 the Duke of Edinburgh was also a patron of the Cutty Sark, which was opened by the Queen in 1957. During the Silver Jubilee of 1977, the Queen embarked at Greenwich for the Jubilee River Pageant. In 1987, she was aboard the P&O ship Pacific Princess when it moored alongside the Old Royal Naval College for the company's 150th-anniversary celebrations. To mark the Diamond Jubilee of Elizabeth II, on 3 February 2012 the Borough of Greenwich became the fourth English borough to hold Royal Borough status, the others being Kingston upon Thames, Kensington & Chelsea and Windsor & Maidenhead. The status was granted in recognition of the borough's historic links with the monarchy, the location of the Prime Meridian and its being a UNESCO World Heritage Site. Governance Greenwich is covered by the Greenwich West and Peninsula wards of the London Borough of Greenwich, which was formed in 1965 by merging the former Metropolitan Borough of Greenwich with that part of the Metropolitan Borough of Woolwich which lay south of the River Thames. Along with Blackheath Westcombe, Charlton, Glyndon, Woolwich Riverside, and Woolwich Common, it elects a Member of Parliament (MP) for Greenwich and Woolwich; Matthew Pennycook was elected as its MP in 2015 and has been re-elected at subsequent elections. The London Borough of Greenwich became the Royal Borough of Greenwich to mark the Diamond Jubilee of Elizabeth II on 3 February 2012, owing to the links of the borough with the royal family. Greenwich is represented on Greenwich London Borough Council by the councillors of the Greenwich Park, East Greenwich and Greenwich Creekside wards. Demographics In comparison with Greater London as a whole, West Greenwich has a relatively higher proportion of residents identifying as White, particularly White British, while still remaining ethnically diverse by national standards. The ward's Black or Black British population, especially those identifying as Black African, is broadly in line with or slightly above the London average, reflecting established African diaspora communities in south-east London. Asian or Asian British residents form a smaller share of the population than across London overall, although the proportion identifying as Chinese is comparatively notable. Mixed or Multiple ethnic groups are represented at similar levels to the London average, while the proportion of residents identifying with Other ethnic groups is slightly lower. Overall, the ethnic composition of West Greenwich reflects a more residential inner-London profile, combining a strong White British presence with significant minority communities.
Based on the 2011 census, the West Greenwich ward had a slightly higher proportion of residents identifying as Christian than Greater London overall, while the proportion reporting no religion was broadly similar. The share of residents identifying as Muslim was lower than the London average at that time, reflecting differences in the ward's demographic composition. Smaller religious groups, including Hindu, Buddhist, Jewish and Sikh communities, were present in proportions generally comparable with Greater London in 2011. As across London as a whole, a notable minority of residents in West Greenwich did not state a religious affiliation. The employment profile of West Greenwich residents is characterised by a strong concentration in service-based industries, particularly financial and insurance activities, reflecting the transport links to Canary Wharf and the City of London. Professional, scientific and technical services account for a higher proportion of employment than the London average. Education and human health and social work activities also represent significant shares of local employment, reflecting the presence of public and institutional employers. In contrast, sectors such as construction, wholesale and retail trade, and transport and storage employ a smaller proportion of residents in West Greenwich compared with Greater London overall. Employment in primary and industrial sectors, including agriculture, mining, manufacturing and utilities, remains minimal, consistent with the ward's urban character. Geography The town of Greenwich is built on a broad platform to the south of the outside of a meander in the River Thames, with a safe deep-water anchorage lying in the river. To the south, the land rises steeply by 100 feet (30 m) through Greenwich Park to the town of Blackheath. The higher areas consist of a sedimentary layer of gravelly soils, known as the Blackheath Beds, that spread through much of the south-east over a chalk outcrop, with sands, loam and seams of clay at the lower levels by the river. Greenwich is bordered by Deptford Creek and Deptford to the west; the residential area of Westcombe Park to the east; the River Thames to the north; and the A2 and Blackheath to the south. The Greenwich Peninsula, north-east of the town centre and also known as North Greenwich, forms the main projection of the town. Across the River Thames to the north lies the Isle of Dogs, a riverside district that includes Millwall, a largely residential area, and Canary Wharf, one of London's principal financial and business centres. To the west, beyond Deptford Creek, is Deptford, an area known for its maritime history, cultural diversity, and creative industries. To the east is Charlton, a predominantly residential district with historic landmarks and extensive green spaces. The Greenwich Peninsula lies to the north-east and is a major regeneration area, home to large-scale residential developments, entertainment venues, and the O2. To the south are Blackheath, an affluent village characterised by its large open heath, and Lewisham, a significant town centre and transport hub within south-east London undergoing redevelopment. To the south-west is St John's, a mainly residential neighbourhood with Victorian housing, while to the south-east lies Kidbrooke, an area with new housing and improved infrastructure. Climate data for the area was collected between 2005 and 2015 at the weather station in Greenwich. Historically, the record high is 100 °F (38 °C), recorded on 9 August 1911.
This stood as the record for London until 2003, although the reading was later disregarded because it was taken with non-standard instruments. Greenwich has an oceanic climate (Köppen: Cfb) with warm summers and cool winters. Sites of interest The Cutty Sark (a clipper ship) has been preserved in a dry dock by the river. A major fire in May 2007 destroyed a part of the ship, although much had already been removed for restoration. Nearby, Gipsy Moth IV, the 54-foot (16.5 m) yacht sailed by Sir Francis Chichester in his single-handed, 226-day circumnavigation of the globe during 1966–67, was also displayed for many years. In 2004 Gipsy Moth IV was removed from Greenwich and, after restoration work, completed a second circumnavigation in May 2007. On the riverside in front of the north-west corner of the hospital is an obelisk erected in memory of Arctic explorer Joseph René Bellot. Near the Cutty Sark site, a cylindrical building contains the entrance to the Greenwich foot tunnel, opened on 4 August 1902. This connects Greenwich to the Isle of Dogs on the northern side of the River Thames. The north exit of the tunnel is at Island Gardens, from where the famous view of Greenwich Hospital painted by Canaletto can be seen. Rowing has been part of life on the river at Greenwich for hundreds of years, and the first Greenwich Regatta was held in 1785. The annual Great River Race along the Thames Tideway finishes at the Cutty Sark. The nearby Trafalgar Rowing Centre in Crane Street is home to the Curlew and Globe rowing clubs. The Old Royal Naval College is Sir Christopher Wren's domed masterpiece at the centre of the heritage site. The site is administered by the Greenwich Foundation, and several of the buildings are let to the University of Greenwich and one, the King Charles block, to Trinity College of Music. Within the complex are the former college dining room, the Painted Hall, which was painted by James Thornhill, and the Chapel of St Peter and St Paul, with an interior designed by James 'Athenian' Stuart. The Naval College had a training reactor, the JASON reactor, within the King William building; it was operational between 1962 and 1996 and was decommissioned and removed in 1999. To the east of the Naval College is the Trinity Hospital almshouse, founded in 1613, the oldest surviving building in the town centre. This is next to the massive brick walls and the landing stage of Greenwich Power Station. Built between 1902 and 1910 as a coal-fired station to supply power to London's tram system, and later the London Underground, it is now oil- and gas-powered and serves as a backup station for London Underground. East Greenwich also has a small park, East Greenwich Pleasaunce, which was formerly the burial ground of Greenwich Hospital. The O2 (formerly the Millennium Dome) was built on part of the site of East Greenwich Gas Works, a disused British Gas site on the Greenwich Peninsula. It is next to North Greenwich Underground station, about 3 miles (4.8 km) north-east of the Greenwich town centre and north-west of Charlton. Pear Tree Wharf was associated with the gas works, being used to unload coal for the manufacture of town gas, and is now home to the Greenwich Yacht Club. The Greenwich Millennium Village is a modern urban regeneration development to the south of the Dome. Enderby's Wharf is a site that was associated with submarine cable manufacture for over 150 years.
South of the former Naval College is the National Maritime Museum, housed in buildings forming another symmetrical group and grand arcade incorporating the Queen's House, designed by Inigo Jones. Continuing to the south, Greenwich Park is a Royal Park of 183 acres (0.7 km²), laid out in the 17th century and formed from the hunting grounds of the Royal Palace of Placentia. The park rises towards Blackheath, and at the top of this hill is a statue of James Wolfe, commander of the British expedition to capture Quebec. Nearby, a major group of buildings within the park includes the former Royal Observatory, Greenwich; the Prime Meridian passes through this building. Greenwich Mean Time was at one time based on the time observations made at the Royal Greenwich Observatory, before being superseded by the closely related Coordinated Universal Time (UTC). While there is no longer a working astronomical observatory at Greenwich, a ball still drops daily to mark the exact moment of 1 p.m., and there is a museum of astronomical and navigational tools, particularly John Harrison's marine chronometers. The Ranger's House lies at the Blackheath end of the park and houses the Wernher Collection of art. Many fine houses, including Vanbrugh's house, lie on Maze Hill, on the eastern edge of the park. Around the covered Greenwich Market, Georgian and Victorian architecture dominates the town centre, which spreads to the west of the park and the Royal Naval College. Up the hill from the centre there are many streets of Georgian houses; one of the houses on Croom's Hill contains the Fan Museum. Nearby, at the junction of Croom's Hill with Nevada Street, is Greenwich Theatre; at the eastern end of Nevada Street is the Greenwich Tavern. To the west, the arthouse Greenwich Cinema is on Greenwich High Road, while the nearby Greenwich Playhouse closed in 2012. There has been a market at Greenwich since the 14th century, but the history of the present market dates from 1700, when a charter to run two markets, on Wednesdays and Saturdays, was assigned by Lord Romney (Henry, Earl of Romney) to the Commissioners of Greenwich Hospital for 1,000 years. The market is part of "the island site", bounded by College Approach, Greenwich Church Street, King William Walk and Nelson Road, near the National Maritime Museum and the Royal Observatory. The buildings surrounding the market are Grade II listed and were built in 1827–1833 under the direction of Joseph Kay. A market roof was added in 1902–1908 (and replaced in 2016). Later significant development occurred in 1958–1960 and during the 1980s. The landowner, Greenwich Hospital, enhanced the market between 2014 and early 2016. Following the onset of the COVID-19 pandemic in 2020, the rents for several of the market stalls were increased by up to 60%, as Greenwich Hospital's managing agent Knight Frank said it was losing money with fewer stalls operating and only four days of trading a week. About 1 mile (1.6 km) east of Greenwich town centre, the Millennium Leisure Park is an out-of-town retail park on Bugsby's Way in east Greenwich. It consists of retail outlets (IKEA and B&Q), restaurants and an Odeon cinema. The IKEA store here opened in 2019 as the retailer's fourth main store in London, following stores in Wembley (1988), Croydon (1992) and Tottenham (2005); the Greenwich store is the first in Inner London. Greenwich Shopping Park is about 0.5 miles (0.8 km) further east, in Charlton.
Greenwich Mean Time Greenwich Mean Time (GMT) is a term originally referring to mean solar time at the Royal Observatory, Greenwich, which overlooks the River Thames from a hill in Greenwich Park. GMT is commonly used in practice to refer to Coordinated Universal Time (UTC) when this is viewed as a time zone, especially by bodies connected with the United Kingdom, such as the BBC World Service, the Royal Navy, the Met Office and others, although strictly UTC is an atomic time scale which only approximates GMT with a tolerance of 0.9 second. It is also used to refer to Universal Time (UT), which is a standard astronomical concept used in many technical fields and is often referred to by the military in the phrase Zulu time. As the United Kingdom grew into an advanced maritime nation, British mariners kept at least one chronometer on GMT in order to calculate their longitude from the Greenwich meridian, which was by convention considered to have longitude zero degrees (this convention was internationally adopted in the International Meridian Conference of 1884).[note 1] The synchronization of the chronometer on GMT did not affect shipboard time itself, which was still solar time. But this practice, combined with mariners from other nations drawing from Nevil Maskelyne's method of lunar distances based on observations at Greenwich, eventually led to GMT being used worldwide as a reference time independent of location. Most time zones were based upon this reference as a number of hours and half-hours "ahead of GMT" or "behind GMT". In recognition of the suburb's astronomical links, Asteroid 2830 has been named 'Greenwich'. World Heritage Site In 1997 Maritime Greenwich was added to the list of World Heritage Sites by UNESCO, for the concentration and quality of buildings of historic and architectural interest. These can be divided into the group of buildings along the riverfront, Greenwich Park and the Georgian and Victorian town centre. The World Heritage Site is described as: The ensemble of buildings at Greenwich, an outlying district of London, and the park in which they are set, symbolize English artistic and scientific endeavour in the 17th and 18th centuries. The Queen's House (by Inigo Jones) was the first Palladian building in England, while the complex that was until recently the Royal Naval College was designed by Christopher Wren. The park, laid out on the basis of an original design by André Le Nôtre, contains the Old Royal Observatory, the work of Wren and the scientist Robert Hooke. — UNESCO The area was selected by UNESCO on the criteria (i)(ii)(iv)(vi), recognizing it as a masterpiece of human creative genius, an important expression of intercultural exchange, an outstanding example of a significant stage in human history, and as being directly associated with events, traditions, or ideas of outstanding universal significance. Following the designation, property values in the surrounding area rose sharply, reflecting increased international visibility, tourism, and investor interest associated with World Heritage status. Greenwich in particular experienced accelerated gentrification, and the area west of Greenwich Park – notably within the West Greenwich Conservation Area – increasingly developed into some of the more affluent residential districts of south-east London, with rising house prices, redevelopment of historic buildings, and an influx of higher-income residents reshaping the local social and economic profile. 
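The longitude calculation described in the Greenwich Mean Time passage above can be made concrete with a minimal sketch: the navigator compares local apparent solar time with the chronometer's GMT and converts the difference at 15 degrees of longitude per hour. The following Python fragment is illustrative only and is not taken from the article; the function name, the sign convention (east positive) and the example figures are assumptions chosen for the illustration.

```python
# Illustrative sketch only (not from the source article): the longitude arithmetic
# described above. A navigator compares local apparent solar time (for example from
# a noon sight, when the Sun crosses the local meridian) with the chronometer's GMT.
# The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees of longitude per hour.

def longitude_from_times(gmt_hours: float, local_solar_hours: float) -> float:
    """Return longitude in degrees, positive east of Greenwich, negative west."""
    offset_hours = local_solar_hours - gmt_hours  # local time ahead of GMT means east of Greenwich
    degrees = offset_hours * 15.0                 # 15 degrees of longitude per hour of difference
    # Normalise into the range (-180, 180].
    while degrees > 180.0:
        degrees -= 360.0
    while degrees <= -180.0:
        degrees += 360.0
    return degrees

if __name__ == "__main__":
    # Local apparent noon observed while the GMT chronometer reads 16:40:
    # local time is 4 h 40 min behind GMT, so the ship is 70 degrees west of Greenwich.
    print(round(longitude_from_times(gmt_hours=16 + 40 / 60, local_solar_hours=12.0), 2))  # -70.0
```

This is the same reasoning the article attributes to British mariners: the chronometer carries Greenwich time across the ocean, and the gap between it and locally observed solar time fixes the ship's east-west position.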
In 2025, The Telegraph named West Greenwich as the fourth-best place to live in London, after Richmond Green, Marylebone and Hampstead. Discover Greenwich Visitor Centre The Discover Greenwich Visitor Centre provides an introduction to the history and attractions of the Greenwich World Heritage Site. It is in the Pepys Building, near the Cutty Sark, within the grounds of the Old Royal Naval College (formerly Greenwich Hospital); the building began life as an engineering laboratory for the college. The centre opened in March 2010, and admission is free. Its exhibits explain the history of Greenwich as a royal residence and a maritime centre. Education The University of Greenwich's main campus occupies most of the grand, landmark riverside buildings of the former Royal Naval College. The university has other campuses at Avery Hill in Eltham and at Medway. The Greenwich campus also houses the Trinity College of Music. Secondary schools in the area include The John Roan School, founded in 1677, and St Ursula's Convent School, established in 1850. Transport Greenwich is served by Greenwich and Maze Hill stations, with Southeastern services to London Cannon Street, Dartford, Barnehurst and Crayford, as well as Thameslink services to Luton via London Blackfriars and to Rainham. The area is also served by North Greenwich station, with Jubilee line services to Stanmore and Stratford. The Docklands Light Railway runs from Greenwich and Cutty Sark stations to Bank via Canary Wharf and to Lewisham, and many London Buses routes serve the area. There are a number of river boat services running from Greenwich Pier, managed by London River Services. The main services include the Thames commuter catamaran service run by Thames Clippers from Embankment, via Tower Millennium Pier, Canary Wharf and on to the O2 and Woolwich Arsenal Pier; the Westminster–Greenwich cruise service by Thames River Services; and the City Cruises tourist cruise via Westminster, Waterloo and Tower piers. The Thames Path National Trail runs along the riverside. The Greenwich foot tunnel provides pedestrian access across the River Thames to the southern end of the Isle of Dogs. The National Cycle Network Route 1 includes the foot tunnel, though cycling is not permitted in the tunnel itself. Sports Greenwich is home to a variety of amateur sports clubs. Its position on the tidal Thames makes it a good location for rowing; the Trafalgar Rowing Centre in Crane Street is the clubhouse of the Curlew and Globe rowing clubs. The Globe has senior and junior squads, the latter renowned for its achievements at national and international level. The Thames Path and Greenwich Park are popular with runners. The 'red start' for the London Marathon is situated south of the park on Charlton Way (other starts are nearby in St John's Park and on Shooter's Hill Road). After heading east through Charlton and Woolwich, the marathon route turns west towards Greenwich; as runners reach the 10 km (6.2-mile) mark, they pass the Old Royal Naval College and then loop around the prow of the Cutty Sark before continuing west towards Deptford. The Greenwich Peninsula Golf Range at North Greenwich is a riverside golf driving range with 60 bays, a mini 18-hole adventure course, golf academy, golf shop and restaurant. Media Greenwich's musical heritage includes the historic role of music in its institutions.
The Royal Borough is home to the Trinity Laban Conservatoire of Music and Dance, a leading UK conservatoire formed by the 2005 merger of the Trinity College of Music and the Laban Dance Centre; its Greenwich site has educated internationally recognised musicians and composers. The Queen's House hosts classical concerts and early music performances, including Renaissance and Baroque programmes in its Great Hall. Greenwich has a long tradition of performance venues influencing its musical life. St Alfege Church has a strong tradition of sacred and choral music, hosting regular services with sung liturgy, a well-regarded choir led by a Director of Music, and free lunchtime recitals that feature classical and vocal performances; the church is also notable as the burial place of the 16th-century composer Thomas Tallis. The Greenwich Theatre traces its origins to the 19th century as the Rose and Crown Music Hall and later Crowder's Music Hall, showcasing concerts, burlesque and variety acts. The wider cultural scene also intersects with music through multidisciplinary festivals such as the Greenwich+Docklands International Festival, which blends outdoor performance art with music during its annual summer programme. Greenwich has also been referenced directly in contemporary jazz: the South London keyboardist, composer and producer James Beckwith titled his album SE10 after the Greenwich postcode, describing it as dedicated to the Greenwich area where he lives and studied at Trinity Laban. Drawing on the dynamic electronic music scene in south-east London, the area has also been used as a setting for live music and promotional events in the genre. The Royal Naval College is the location of the electronic music festival Labyrinth on the Thames, which uses the river and its surroundings in Greenwich as a backdrop for an immersive musical experience. Its past headliners include the Grammy Award–winning DJ Black Coffee, the Australian electronic duo Empire of the Sun, the deep house and techno DJ Solomun, and the dance music artist FISHER. More recent and announced performers include the electronic acts Moby, Dom Dolla and Overmono, reflecting the festival's mix of internationally renowned DJs and live electronic acts. Greenwich, particularly the World Heritage Site and the Royal Naval College, has been used extensively as a filming location for film and television productions. Notable productions filmed at the Royal Naval College include Thor: The Dark World, Les Misérables, Skyfall, The Dark Knight Rises, and the television series Bridgerton and The Crown. Bridgerton was also filmed at the Ranger's House. In January 2026, the entertainment press reported that filming for Avengers: Doomsday had taken place in Greenwich, including at the Royal Naval College. Greenwich Picturehouse is an independent cinema on Greenwich High Road that has played a notable role in the area's film culture. It originally opened in 1989 as the Greenwich Cinema, a purpose-built multi-screen venue, but closed in 2002 following increased competition from larger multiplex cinemas. The building was subsequently refurbished and reopened in 2005 as Greenwich Picturehouse, expanding to five screens and adding a café and bar. Since its reopening, the cinema has become an established cultural venue in the borough, screening a mix of mainstream releases, independent and foreign-language films, documentaries, and special events, and hosting festivals and community-focused programmes that contribute to Greenwich's wider cultural life.