United States

The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k]

Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German, and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower.

The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement.

A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890.
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs.

Etymology

Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776.

The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb.

"America" is the feminine form of Americus, the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America.

History

The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Nederland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.

Geography

The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape.

The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions most attractive to new residents are also among the most vulnerable to extreme weather.

The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes.

Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.

Government and politics

The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r]

Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Free Trade Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. had become a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. Total strength of the entire military is about 1.3 million active duty with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments greater than 100 active duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000.

There are about 18,000 police agencies in the United States, operating at levels from local to national. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights, national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law.

There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined.

In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher".

Economy

The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. treasuries market, and its linked eurodollar. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050.

The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people). A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast.

The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001.

The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles.

Demographics

The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, it had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. De facto, English is the official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023.

The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho.

About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West.

According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight.

The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates who have collectively won 413 awards. U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government, such as the U.S. service academies, the Naval Postgraduate School, and the military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite various student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as a homogenizing melting pot, and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1791. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers, like Harriet Beecher Stowe, and authors of slave narratives, such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are presented as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have arrived early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but rural artisans there tended to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and most ticket-selling movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are among the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Template_talk:FOSS] | [TOKENS: 4358]
Contents Template talk:FOSS This Template I created this template as a result of a request at Template talk:Linux. I consider that it needs a lot of development and also that it needs to be put on a lot of pages. Please feel free to do both and let's see how we can evolve it! - Ahunt (talk) 17:13, 24 March 2008 (UTC)[reply] I think that doing away with the list was a good idea. It would have got very long very quickly. The page linked to is the most complete list of open source applications on Wikipedia. - Ahunt (talk) 23:56, 24 March 2008 (UTC)[reply] Okay - makes sense. I'll leave the portal links on the nav template for convenience, but leave the portal box on the pages, too. I did think it reduced clutter! - Ahunt (talk) 13:16, 25 March 2008 (UTC)[reply] PS - Thanks for putting the portal box into the template. That makes it a bit more obvious! I reverted the removal of the portal box on the Mozilla Firefox page, but have left it removed on some of the shorter pages, where it was close to the nav box. We'll see what other editors think. - Ahunt (talk) 13:45, 25 March 2008 (UTC)[reply] New group for compilers & interpreters? I would be tempted to add GCC and some other open source compilers, but it probably doesn't make sense to include it with the other applications. Perhaps we need a new group for Computer Languages? -- Schapel (talk) 18:35, 24 March 2008 (UTC)[reply] Template placements I think we should stick to one rule - place the template on pages only that the template lists. I think this should actually apply for all templates ever.--Kozuch (talk) 18:35, 25 March 2008 (UTC)[reply] Linux Foudation I am not sure Linux Foundation should be here, as it is mostly Linux-only oriented and is in Linux template already.--Kozuch (talk) 12:32, 26 March 2008 (UTC)[reply] Why are we linking to individual projects. While CUPS is very important, there are thousands of other *nix components and applications that are also important. Why should this template have any links to projects at all? Instead there should be one link to "important projects". Based on feedback, or lack of it. I will make the template project neutral in the future. (Ftrotter (talk) 17:24, 6 May 2008 (UTC))[reply] bsd should this template truly link "bsd?" Noone uses bsd, they use openbsd, freebsd, or netbsd. 74.13.56.168 (talk) 14:44, 31 July 2008 (UTC)[reply] Programming languages? Hi, I am puzzled with linking to some programming languages in the development section of this navbox. Even if, for example in the case of Java, the Sun's JVM is free software there are a lot of other compilers and virtual machines that aren't. And in any case, even if it has some sense linking to a particular implementation of a compilation toolchain for given programming language, the language per se cannot be considered free software or not. From my POV, it would be more interesting in that section to link to development tools like binutils, gdb, autotools, qemu... but IMO it doesn't have any sense to link to a programming language. Opinions? —surueña 16:38, 1 October 2008 (UTC)[reply] Power.org Is Power.org a FOSS Foundation, or is it just IBM related vendors and common competitors such as Sun Microsystems supporting it. Because if you look at the official site, some documents are not free to everyone, looks a bit similar to IEEE. 
--Ramu50 (talk) 03:56, 10 November 2008 (UTC)[reply] Sentence case for the title The example given on template:navbox/doc for the title field is given in sentence case, and this appears to be the predominant form used on other templates. It also conveniently allows us to bypass a redirect without having to pipe the link. Therefore, the title of this template should be in sentence case and not title case. Chris Cunningham (not at work) - talk 09:29, 13 November 2008 (UTC)[reply] Object Management Group Is Object Management Group open source? --Ramu50 (talk) 23:43, 13 November 2008 (UTC)[reply] APEX APEX stand for AGEIA Adaptive Physics EXtensions (APEX) Development Platform, I believe this is a physics library for Aegia platform. This is probably similar to a lot of the Flash Engine that is trying to accomplish physics implementations. --Ramu50 (talk) 22:45, 13 November 2008 (UTC)[reply] Inappropriate links There seems to be a number of inappropriate links appearing in this navbox recently, for instance, Template:Web browsers, ECMAScript and Template:Layout engines - none of these topics are intrinsically related to free/open-source software. Also, I'm dubious about the practice of linking to (as opposed to transcluding) navbox templates. Letdorf (talk) 11:36, 17 November 2008 (UTC).[reply] I am reverting the browser section, only Mozilla is part of the History is entirely WP:OR, KDE involement in its own engines, browsers are also part of the FOSS history. Just because Mozilla own majority of the market, that doesn't make other browsers non-notable. Not reverting the Unix printing. --Ramu50 (talk) 01:32, 25 November 2008 (UTC)[reply] But for the least ECMAScript should be included, it is an important history of FOSS. I will see if there are better article to represent Open Source Browser History. --Ramu50 (talk) 01:03, 26 November 2008 (UTC)[reply] OpenGL and Y!OS Isn't OpenGL open source. Only The Khronous Group is non-FOSS organization. Because it wasn't I don't think Xgl architecture would be allowed in all Linux OS. I really don't see why Y!OS can't be there if Google Code is allowed. ACML is open source for sure. AMD allows users contributions to develop on the library and a forum even exists. Sorry about SVG, didn't know it was developed by W3C before. By the way does anyone how what license is Atom and RSS registered as? --Ramu50 (talk) 01:04, 26 November 2008 (UTC)[reply] Interesting This site says http://www.sgi.com/products/software/opengl/license.html even though money is involved, but I think OpenGL is probably registered as an NPO. --Ramu50 (talk) 01:15, 26 November 2008 (UTC)[reply] The ACML you provided is just a EULA. All software installations whether open source, proprietary or non-profit must have one. However, Section 2 does state the software is free, its just AMD doesn't allow tampering with the source code. You can't modify the source is mentioned in Section 2b, i and ii. Some of the forum suggest it is dual-license GNU GPL and BSD license. --Ramu50 (talk) 20:23, 27 November 2008 (UTC)[reply] One questions should NPO be allowed on this template? --Ramu50 (talk) 20:38, 27 November 2008 (UTC)[reply] ACML is Free Software, but not Open Source true, but the template is called "FOSS," Free AND Open Source. I think your thoughts (or confusion) are as follows OR However, ACML is neither. --Ramu50 (talk) 21:41, 28 November 2008 (UTC)[reply] Do you even know how to read the FOSS abbreviations said Free / Open Source. 
Are you illiterate or what, not knowing the slash means "or", NOT the word "AND." Load of bullshit Original Research claims. User can choose between "Free" and "Open Source." If you guys think this article should ONLY include items that are BOTH "Free" AND "Open Source," then why didn't you think submit a consensus when I submit the numerous Organizations. I even ask for NPO and you guys didn't even said anything. --Ramu50 (talk) 00:44, 30 November 2008 (UTC)[reply] XNU? It's redundant to list both Darwin (operating system) and XNU here - the latter is the kernel component of the former. Anybody disagree? Letdorf (talk) 11:46, 29 November 2008 (UTC)[reply] Initially I was wondering should the father of a family of Distributions should be on there like Ubuntu. Because unlike other distributions Ubuntu distributions does have a project, not just targeting at the mainstream entertainment. The majority of the mainstream distributions have no specific kernel architecture changes, thus placing them would be useless. My idea was the Mach kernel itself is a family, but in comparison to other OS I don't think any other OS family besides BSD, have so many foundations supporting them. Thus I think they are XNU OS Project out there its just we don't know about them. The reason why I pointed out XNU, because it is a successful hybrid kernel that Adobe, Eclipse. Likewise with other projects e.g. scripting languages of Pascal, Delphi, SmallTalk are all equivalent notable as Perl, Python...etc. Most of them have web application framework, virtual machine, compliers...etc supporting them but if a template doesn't exist who will ever understand them. For the same reason if the XNU was not being place who will ever know their project. --Ramu50 (talk) 21:28, 29 November 2008 (UTC)[reply] Speaking of Distributions I think OpenSolaris needs to be removed. The distributions of OpenSolaris are just a random offering, they are no specific project development. --Ramu50 (talk) 21:40, 29 November 2008 (UTC)[reply] Changes I remove Mozilla Thunderbird, because it is not a package. I think you guys should of choose the article Mozilla Software Rebranding to be link with, it has more relationship with Linux, since it encompass Mozilla internet suite, e-mail clients and fork software altogether. Also the template needs to focus on Guiding the FOSS topics, not directing them to each individual software and advertising them. For now I am still trying to find a Office Suite template to replace Openoffice.org. Problems Software Packages are things like Google Desktop and GNU. Office suites are not packages, the basis of Office does encompass Word Processing, Presentations (e.g. PowerPoint), Spreadsheets, Publication (and that is why I added DTP template). If people want to know more about each types of software, the related template section is there to guide them and thus each of those template have further studies to guide them, if the viewer wish to learn about it. Currently trying to sort out the following I think I might need to place GNU license back on there, since I think it is a father of all FOSS license. All other license I think will be better off if we directed to a "list" article of a "comparison article." Section History KDE was place there, because Template:KDE show a strong evidence, that it is a major role in the Browser history, as it has its own DOM (KDOM), COM (KCOM), SVG development and therefore is equally notable as other browsers. 
--Ramu50 (talk) 22:07, 29 November 2008 (UTC)[reply] Choosing not to discuss with consensus is your own problem. If you choose to be bias and insist on claiming "incomprehensible" as your reply. Then your contributions won't be counted as part of consensus. --Ramu50 (talk) 00:41, 30 November 2008 (UTC)[reply] Mono/C#/GTK# Shouldn't Mono be included under 'development'? Floker (talk) 08:43, 4 March 2009 (UTC)[reply] FOSS Logo I would like to replace the current unofficial FOSS Logo (left) with another unofficial logo which better describes "Free and Open Source Software". The logo I wish to replace it with is this one (right) This logo is created from the GNU logo and the Open Source Logo. The objection by User:Ahunt was the following: disagree - there is a lot more to free software than Gnu While I agree that there are many organizations that contribute to free software (as there are many organizations which contribute to open source) it Richard Stallman who started the Free Software movement. The following paragraph comes from the Free Software page The free software movement was conceived in 1983 by Richard Stallman to satisfy the need for and to give the benefit of "software freedom" to computer users. The Free Software Foundation was founded in 1985 to provide the organizational structure which Stallman correctly foresaw would be necessary to advance his Free Software ideas. The use of the GNU logo is not attempting to say that GNU is the ONLY free software project. The logo is not attempting to be a measure of software that organizations have written. It IS attempting to describe the thinking behind FOSS which comes from both the GNU project and OSI. Perhaps you can think of a better logo than the current one and better than the one I proposed. And if so, that would be great because the current logo needs to be replaced. The current logo which is simply an the acronym for Free and Open Source Software written on a green square with curved edges is only relevant from the text on it. By that I mean, for example, if a company made a device which was a combination of a Stapler and Three Hole Puncher and the device was commonly called STHP, writing "STHP" on a green background hardly makes it a "logo" for the Stapler and Three Hole Puncher device. A logo should convey the subject it is representing. Going back to the example, unless the company had a color scheme that EVERYBODY would recognize and it used a very unique font, then a logo with the background and color scheme of the company which "STHP" written in the font would be appropriate but if not, it really doesn't have anything to do with the STHP besides the fact it says on on the logo. Ahunt - Knowing that the logo represents the thinking behind Free Software and Open Source Software and not the number of commits to any source tree, do you still have a problem with it? Harrisonmetz (talk) 14:57, 4 April 2009 (UTC)[reply] - Ahunt (talk) 15:18, 4 April 2009 (UTC)[reply] Alright, I see what you are saying. It would be great to reach a consensus of what people would prefer (does wikipedia have some sort of voting mechanism :P). When it comes to Free Software and Open Source Software it is a very touchy issue. Everyone is satisfied with the end product but the means to which it was reached, the motivations, and the ethics are different. Harrisonmetz (talk) 15:51, 4 April 2009 (UTC)[reply] If you go to the main Free Software page the logo still exists on the link to the portal (near the bottom of the page just up from the template box). 
Can that be removed as well. Harrisonmetz (talk) 02:26, 7 April 2009 (UTC)[reply] Important clarification for anybody following this discussion: the proposed logo would violate the OSI trademark, which they strongly police (see http://opensource.org/trademark) 81.102.83.190 (talk) 21:35, 7 April 2015 (UTC)[reply] Notable packages Why does "Notable packages" include only a single example of server stuff, browser, email client, windowing system, graphical UI, and office only? Why not graphics editors (3D and 2D: raster (Gimp), vector (Inkscape), movie/animation (Blender), other), text editors (most notable and the ones of the most sophisticated: Emacs and Vim), mathematical software (Maxima, R, Sage, Scilab, GNU Octave, SymPy, NumPy, other), some other types of software? Maybe it'd be better to link to other templates and note that one should look at the row labeled "Open Source": Template:Computer algebra systems Template:Numerical analysis software Template:Raster graphics editors Template:Vector graphics editors Template:Animation editors Template:Text editor Kazkaskazkasako (talk) 16:39, 10 October 2009 (UTC)[reply] I don't really agree with this change. It seems like the point of the "Notable packages" section was to quickly note some extremely widely used packages that most people would be familiar with. Right now it kind of just seems to be wasting space, List of open source software packages is not a very good or useful article. Sure there might be some debate about what items to list, but is that the only reason to remove the section? Cheers, — sligocki (talk) 19:59, 12 October 2009 (UTC)[reply] Semi-protection User:HJ Mitchell semi-protected this template with the summary "Protected Template:Linux: requested at RfPP, high visibility template using TW ([edit=autoconfirmed] (indefinite) [move=sysop] (indefinite))" but I am not seeing a history of vandalism in the edit summary. Is this really justified? - Ahunt (talk) 12:07, 5 June 2010 (UTC)[reply] Adding Software Freedom Conservancy under FOSS Organizations I would like to request to add Software Freedom Conservancy in the list of FOSS Organization template. OnesimusUnbound (talk) 04:45, 21 February 2011 (UTC)[reply] Spelling convention for prompts Is there supposed to be one? Right now it's kind of Canadian, with "organization" and "licence". But there's also a "license" mixed in, US style. Varlaam (talk) 19:19, 27 May 2011 (UTC)[reply] Beerware License Suggest adding Beerware as a license, as it is used in FreeBSD. I can't edit semi-protected pages yet. --Silentquasar (talk) 20:03, 1 November 2011 (UTC)[reply] Symbian Symbian is not a FOSS operating system. I request that it be removed from the list. 16:00, 25 April 2012 (UTC) — Preceding unsigned comment added by NuclearWizard (talk • contribs) Done - Ahunt (talk) 16:15, 25 April 2012 (UTC)[reply] Semi-protected edit request on 3 September 2014 I'd like to add Ethereum to the list of organizations in the Free and open-sourced software template. Anyone can work on its codebase, it's competely open source and can be forked at any point - https://github.com/ethereum 86.153.226.156 (talk) 20:05, 3 September 2014 (UTC)[reply] Add Sleepycat License to list of licenses Sleepycat License is a copyleft free software license recognized by OSI, FSF and DFSG (apparently). 
2001:2003:54FA:D2:0:0:0:1 (talk) 19:07, 17 July 2017 (UTC)[reply] Add Python licenses to list of licenses The following articles cover free software licenses with recognition from the Free Software Foundation et al.: The articles have references for inclusion and verifying the free license status. 2001:2003:54FA:D2:0:0:0:1 (talk) 16:06, 20 July 2017 (UTC)[reply] Add Free genealogy software Please add a link to the Category:Free genealogy software all eight entries appear on this template Template:Genealogy_software in the Open source section at top. .Kickermoth (talk) 21:54, 12 October 2019 (UTC)[reply] Semi-protected edit request on 3 July 2023 37.99.39.44 (talk) 12:51, 3 July 2023 (UTC)[reply]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Kenites] | [TOKENS: 4127]
Contents Kenites According to the Hebrew Bible, the Kenites/Qenites (/ˈkiːnaɪt/ or /ˈkɛnaɪt/; Hebrew: קֵינִי‎, romanized: Qēni) were a tribe in the ancient Levant. They settled in the towns and cities in the northeastern Negev in an area known as the "Negev of the Kenites" near Arad, and played an important role in the history of ancient Israel. One of the most recognized Kenites is Jethro, Moses's father-in-law, who was a shepherd and a priest in the land of Midian (Judges 1:16). Certain groups of Kenites settled among the Israelite population, including the descendants of Moses's brother-in-law, although the Kenites descended from Rechab maintained a distinct, nomadic lifestyle for some time. Other well-known Kenites were Heber, husband of Jael, the Biblical heroine who killed General Sisera, and Rechab, the ancestor of the Rechabites. Etymology The word qēni (קֵינִי) was a patronymic derived from qayin (Hebrew: קַיִן). There are several competing etymologies. According to the German Orientalist Wilhelm Gesenius, the name is derived from the name Cain, the same name as Cain, the son of Adam and Eve. However this may simply be the ancient Hebrew transliteration or phonetization of the Kenites' name in their own language. It could be related to other names, such as Kenan or Cainan. Other scholars have linked the name to the term "smith". According to Archibald Henry Sayce, the name Kenite is identical to an Aramaic word meaning a smith, which in its turn is a cognate of Hebrew qayin "lance". Historical identity The Kenites are a clan mentioned in the Bible as having settled on the southern border of the Kingdom of Judah. In I Samuel 30:29, in the time of David, the Kenites settled among the tribe of Judah.[better source needed] In Jeremiah 35:7-8 the Rechabites are described as tent-dwellers with an absolute prohibition against practicing agriculture; however, other Kenites are described elsewhere as city-dwellers (1 Samuel 30:29, 1 Chronicles 2:55).[citation needed] Hippolytus of Rome in his Chronicon of 234 appears to identify the Kinaidokolpitai of central Arabia with the biblical Kenites. In modern sources the Kenites are often depicted as technologically advanced nomadic blacksmiths who spread their culture and religion to Canaan. The suggestion that the Kenites were wandering smiths was first made by B. D. Stade in Beiträge zur Pentateuchkritik: dasKainszeichen in 1894 and has since become widespread. This view of the Kenites originated in Germany in the mid-1800s, and it is not reflected in any ancient Hebrew, Greek, Latin, or Arabic sources. In 1988, Meindert Dijkstra argued that an ancient inscription in a metal mine in the Sinai Peninsula contained a reference to "a chief of the Kenites" (rb bn qn). In the Bible Genesis 15:18-21 mentions the Kenites as living in or around Canaan as early as the time of Abraham. According to some traditions, Moses's father-in-law, Hobab, was a Kenite (Judges 1:16), although according to Exodus his father-in-law was instead a priest of Midian named Reuel (2:16-18) or Jethro (3:1). At the Exodus, Jethro and his clan inhabited the vicinity of Mount Sinai and Mount Horeb. (Exodus 3:1) In Exodus 3:1 Jethro is said to have been a "priest in the land of Midian" and in Numbers 10:29, Hobab is the son of Reuel, although the text is not clear which is Moses's father-in-law. In Judges 3:1, Hobab the Kenite is Moses' father-in-law. 
The confusion of these names has led many scholars to believe that the terms "Kenite" and "Midianite" are intended (at least in parts of the Bible) to be used interchangeably, or that the Kenites formed a part of the Midianite tribal grouping. The Kenites journeyed with the Israelites to Canaan (Judges 1:16); and their encampment, apart from the latter's, was noticed by Balaam. The Kenites were closely allied with Moses, and are not mentioned as having participated in the first invasion of Canaan (Numbers 14:39–45, Deuteronomy 1:41–46) that was conducted against Moses's orders. During the second invasion of Canaan (Numbers 21:1–4), the Kenites would have seen the area around the town of Arad, the region of Canaan that the next generation of Kenites would later choose as their place to settle after the conquest. When the Israelites and Kenites were camped at the foot of Mount Peor, King Balak of Moab allied himself with the five Kings of Midian, but seeing that they did not have the strength to defeat the Israelites, the leaders of Moab and Midian gathered together and paid a large fee to Balaam to put a curse on the Israelite camp from the high place (a type of religious shrine) on Mount Peor (Numbers 22:1–21). Balaam was unable to curse Israel, but he prophesied about the Kenites, saying that they would endure yet foretelling that someday they would be led away captive as slaves to Assur (Numbers 24:21–22), leaving unanswered the question of how long their future slavery would last. While the Israelites were still encamped on the west side of Mount Peor, the local Moabites attempted to include the Israelites in their worship of their god Baal of Peor. During the commotion and bloodshed, Moses's grandnephew Phinehas killed a Midianite princess, Cozbi, the daughter of King Zur, one of the five Kings of Midian (Numbers 25:14–18). Following this, Moses sent a strike force of 12,000 men (1,000 from each Israelite tribe; the Kenites were not included) that succeeded in killing the five kings Evi (אֱוִי), Rekem (רֶקֶם), Hur (חוּר), Reba (רֶבַע), and Zur (צַוָּר), the father of Cozbi (Numbers 31:8, Joshua 13:21), and burned each of the Midianite cities and all of their encampments, taking their livestock (Numbers 31:1–12). The Kenites were not included in the invasion of Midian, and it is unclear how they reacted to the fall of the Midianite kings to whom they had formerly been subject. After the death of Moses, Joshua led the Israelite invasion of Canaan, conquering a large portion of central Canaan. Upon Joshua's death, the Israelite tribes of Judah and Simeon took action to conquer southern Canaan, defeating the Canaanites and the Perizzites at the Battle of Bezek (now Ibziq) in Judges 1:5. After Judah's sieges of Jerusalem and Debir, Judges 1:16 says that Jethro's Kenite descendants "went up from the City of Palms (which appears to be Zoar or Tamar in the upper Arabah) with the men of Judah to live among the people of the Desert of Judah in the Negev near Arad." Following the conquest, the Israelites began to assimilate into the larger Canaanite culture and started converting to the Canaanite religion (Judges 2:11–13, Judges 3:1–7), only returning to their national religion when confronted by an 8-year invasion and occupation by the North Mesopotamians (from Aram-Naharaim) under King Cushan-Rishathaim. 
(Non-biblical sources show the diplomatic tension between Egypt and Naharin (Mitanni) first as military rivals, under Thutmose III and Shaushtatar but after a long-negotiated marriage alliance under Thutmose IV and Artatama I they became close allies.) After 8 years the Israelites made war against Naharaim. The Israelites rose up under the leadership of Othniel the son of Kenaz, (thus the nephew of Caleb, Judah's previous war-leader) who was a neighbor of the Kenites and lived in the same area (Judges 3:9–11). Although the text is brief, it is likely Othniel had reliable political support at-the-ready from his relatives the Calebites and Kenizzites, and probably from his Kenite neighbors as well, this likely gave him a large support base for the tribe of Judah to unite around. Later, King Eglon of Moab allied with the Kingdom of Ammon and nation of Amalek, in order to invade the territory of Israel. (Judges 3:12–15) After defeating the Israelites, Moab and Amalek took the City of Palms (believed to be the later city of Zoar or Tamar), from the Kenites. [ 2 Chronicles 28:15 defines the City of Palms as Jericho.] At this point, around 180 or 190 years after Joshua's invasion, the Canaanites in northern Canaan under King Jabin ruling from Hazor re-asserted their dominance over Canaan (Judges 4:1–3). The Israelite leader Shamgar appears to have been battling with the Philistines in south Canaan at the time, and was either caught off-guard, or unable to prevent the rising Canaanite military, economic, and political power. (Non-biblical sources depict the King of Hazor affirming loyalty to the Egyptian pharaoh, and joining the cities of Qatna and Mari to create a trade route that linked Egypt to Ekallatum) During this period, Heber the Kenite and his wife Jael separated from their Kenite brethren in the south, and went to live in northern Canaan (Judges 4:11). After two decades of North Canaanite dominance in the region, the prophetess Deborah, who was now leading Israel, commissioned Barak the son of Abinoam as her commander to lead the Israelites against the Canaanites. (Judges 4:4–10) King Jabin's general Sisera learned that Barak was massing troops on Mount Tabor, situated between Sisera's base at Harosheth Haggoyim (believed to now be Ahwat) and the Canaanite capital at Hazor, and set out northward to meet him with 900 chariots. The weather became unfavorable to Sisera's army, the sky became clouded (Judges 5:4–5), and the river that his chariots needed to cross was flooded. While Sisera attempted to ford his chariots through the torrential Kishon River at a river crossing close to the then-Canaanite city of Taanach (Now known as Ti'inik) near Megiddo (Judges 5:19–21), Barak's 10,000 men went down southwestward from Mount Tabor (Judges 4:14) to give battle on the plain and rivers. Sisera left his chariot behind and escaped the battle on foot, while Barak pursued the chariots that were fleeing back to the Canaanite base at Harosheth Haggoyim (Judges 4:15–16) As Sisera fled on foot near Kedesh-Naphtali, he was passing by the tent of Heber the Kenite, and Jael offered to shelter him. Accepting her offer, he asked her to stand in the doorway of the tent, and to deny his presence to anyone who was chasing him. However, once he was asleep, Jael hammered a tent peg into Sisera's head, and he died. (Judges 4:17–22, Judges 5:24–30) From that point onwards, Israel grew stronger and continued to press Hazor harder, until King Jabin's defeat. 
(Judges 4:23–24) In the time of King Saul there were Kenites living in Amalek territory. When King Saul of Israel went to war against Amalek, the kindness which the Kenites had shown to Israel in the wilderness was gratefully remembered. "Ye showed kindness to all the children of Israel, when they came up out of Egypt," said Saul to them (1 Samuel 15:6); and so not only were they spared by King Saul, but later in the war David also sent a share of the spoil that he took from the Amalekites to the civic elders of the cities of the Kenites. (1 Samuel 30:26–31) In King Rehoboam's fifth year the Negev, including the Negev of the Kenites, was briefly occupied by the Egyptians during Pharaoh Shishak's (Shoshenq I) campaign into southern Palestine mentioned in 1 Kings 14:25–26 and 2 Chronicles 12:2–12. The fortifications of Arad and "Great" Arad are listed on Row VIII of the Bubastite Portal as falling to Shoshenq after Shaaraim and before Yeruham. While the Kenite territory in the Negev had earlier been seen as a separate territory from the parts of the Negev held by Judah and the Simeonites, as the Israelites grew in power, the Negev would be mentioned in the later histories as a single region and integral part of the Kingdom of Judah. In the northern Negev, the city of Arad served as a key administrative and military stronghold for the Kingdom of Judah. It protected the route from the Judaean Mountains to the Arabah and on to Moab and Edom. It underwent numerous renovations and extensions. Archeology The Kenites have been proposed as a reason for the appearance of Midianite pottery imported into the Negev of the Kenites during the 1200s and 1100s BC. Petrographic studies carried out on some of the Timna wares led to the conclusion that they originated in the Hejaz, most probably in the site of Qurayya in Saudi Arabia. J. Gunneweg analyzed pottery samples with the help of The Hebrew University and the University of Bonn in 1991. The Midianite pottery found in the Negev was linked to a kiln discovered at Qurayya, Saudi Arabia, through neutron activation analysis. Excavations at the site of Horvat Uza, and an ostracon from Arad, seem to indicate the presence of Kenite groups in the Negev in monarchic Judah. Israeli historian Nadav Na'aman argues that the absence of anthropomorphic and other figurines at the site points to the Kenite settlers practicing aniconism. The upper and lower areas of Tel Arad were excavated during 18 seasons by Ruth Amiran and Yohanan Aharoni between 1962 and 1984. An additional eight seasons focused on the Iron Age water system. The Tel Arad temple was uncovered by archaeologist Yohanan Aharoni during the first excavation season in 1962. The temple complex was destroyed by an earthquake around 800 BC. At the time of its destruction, the main altar associated with the worship of Jehovah stood alongside a smaller altar. When the two altars were submitted for organic residue analysis, several cannabis derivatives were detected on the smaller altar: THC, CBD, and CBN; when combined with discoveries at other sites, the use of woven hemp fabric has been linked to the worship of the goddess Asherah. In 2019, Margreet L. Steiner noted the architectural similarities between the temple at Arad and the temple found at Khirbat Ataruz. Critical scholarship According to the Kenite hypothesis proposed by the German writer Friedrich Wilhelm Ghillany, Yahweh was historically a Midianite deity, and the association of Moses's father-in-law with Midian reflects the historical adoption of the Midianite cult by the Hebrews. 
Moses apparently identified Jethro's concept of a god, Yahweh, with the Israelites' god El Shaddai. The Kenite hypothesis supposes that the Hebrews adopted the cult of Yahweh from the Midianites via the Kenites. This view, first proposed by Friedrich Wilhelm Ghillany in 1862, afterward independently by the Dutch scholar of religion Cornelis Tiele in 1872, and more fully by the German critical scholar Bernhard Stade, has been more completely worked out by the German theologian Karl Budde; it is accepted by the German Semitic scholar Hermann Guthe, Gerrit Wildeboer, H. P. Smith, and George Aaron Barton. Another theory is that a confederation of regional tribes were connected to monotheistic ritual at Sinai. Some biblical scholars postulated that the Kenites were descendants of the mythical Cain. The German orientalist Wilhelm Gesenius asserted that the name is derived from the name Cain (קַיִן Qayin). The German orientalist Walter Beltz alternatively proposed that the story of Cain and Abel was not originally about the murder of a brother, but a myth about the murder of a god's child. In his reading of Genesis 4:1, Eve conceived Cain by Adam, and her second son Abel by another man, this being Yahweh. Eve is thus compared to the Sacred Queen of antiquity, the Mother goddess. Consequently, Yahweh pays heed to Abel's offerings, but not to Cain's. After Cain kills Abel, Yahweh condemns Cain, the murderer of his son, to the cruelest punishment imaginable among humans: banishment. Beltz believed this to be the foundational myth of the Kenites, a clan settled on the southern border of Judah that eventually resettled among the tribes of Judah. It seemed clear to him that the purpose of this myth was to explain the difference between the nomadic and sedentary populations of Judah, with those living from their livestock (pastoralists, not raising crops) under the special protection of Yahweh. Ronald Hendel believes the Israelites linked the Kenites to Cain to give them a "shameful, violent ancestral origin". According to the critical interpretation of the Biblical data, the Kenites were a clan settled on the southern border of Judah, originally characterized by a semi-nomadic lifestyle and involved in the copper industry in the Aravah region. In the 1899 Hastings' Dictionary of the Bible, Archibald Sayce suggested that the Kenites were a tribe of smiths. Based on the biblical references, proposed etymological linkage of the name 'Kenite' to blacksmithing and other evidence, various scholars have associated the Kenites with coppersmithery and metalwork. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/F%C3%A9d%C3%A9ration_A%C3%A9ronautique_Internationale] | [TOKENS: 1166]
Contents Fédération Aéronautique Internationale The World Air Sports Federation (French: Fédération aéronautique internationale; FAI) is the world governing body for air sports, and also stewards definitions regarding human spaceflight. It was founded on 14 October 1905, and is headquartered in Lausanne, Switzerland. It maintains world records for aeronautical activities, including ballooning, aeromodeling, and unmanned aerial vehicles (drones), as well as flights into space. History The FAI was founded at a conference held in Paris on 12–14 October 1905, which was organized following a resolution passed by the Olympic Congress held in Brussels on 10 June 1905 calling for the creation of an Association "to regulate the sport of flying, ... the various aviation meetings and advance the science and sport of Aeronautics." The conference was attended by representatives from 8 countries: Belgium (Aéro Club Royal de Belgique, founded 1901), France (Aéro-Club de France, 1898), Germany (Deutscher Luftschiffer Verband, also known as the "German Airship League", founded 1902), Great Britain (Royal Aero Club, 1901), Italy (Aero Club d'Italia, 1904), Spain (Real Aero Club de España, 1905), Switzerland (Aero-Club der Schweiz, 1900) and the United States (Aero Club of America, 1905). On 2 February 2017, the FAI announced its new strategic partnership with international asset management firm Noosphere Ventures. FAI Secretary General Susanne Schödel, FAI President Frits Brink and Noosphere Ventures Managing Partner Max Polyakov signed the agreement, making Noosphere Ventures FAI's Global Technical Partner. The FAI suspended Russia and Belarus due to the 2022 Russian invasion of Ukraine, as a result of which pilots from Russia and Belarus will not be able to compete in any FAI-sanctioned event in the 13 FAI air sports disciplines, including paragliding, hang gliding, and paramotoring. FAI General Conference The 117th FAI General Conference took place in Dayton, Ohio, US (the 'Birthplace of Aviation') on 26 and 27 October 2023. The 118th FAI General Conference was held from 20 to 21 November 2024 in Riyadh, Saudi Arabia. The 119th FAI General Conference will take place in Vantaa, Finland from 22 to 24 October 2025. Sports 13 Sports: Air Sport Commissions Events All the events sanctioned by the FAI are listed in its events calendar. The years in which the first World Championship in each class took place (see Model aircraft competitions and classes) are as follows: Activities The FAI is the international governing body for the following activities: The FAI establishes the standards for records in the activities. Where these are air sports, the FAI also oversees international competitions at world and continental levels, and also organizes the World Air Games and FAI World Grand Prix. The FAI organises the FAI International Drones Conference and Expo. This event offers a platform for organisations, businesses and individuals to discuss how drones are used today and to create a framework for how they will be used and how they will impact life in the future. The FAI also keeps records set in human spaceflight, through the FAI Astronautic Records Commission (International Astronautic Records Commission – ICARE). The FAI defines the limit between Earth's atmosphere and outer space, the so-called Kármán line, as the altitude of 100 kilometres (62 miles; 330,000 feet) above Earth's sea level. Records Among the FAI's responsibilities is the verification of record-breaking flights. 
For a flight to be registered as a "World Record," it has to comply with the FAI's strict rules, which include a proviso that the record must exceed the previous record by a certain percentage. Since the late 1930s, military aircraft have dominated some classes of record for powered aircraft such as speed, distance, payload, and height, though other classes are regularly claimed by civilians. Some records are claimed by countries as their own, even though their achievements fail to meet FAI standards. These claims are not typically granted the status of official records. For example, Yuri Gagarin earned recognition for the first manned spaceflight, despite failing to meet FAI requirements. The FAI initially did not recognize the achievement because he did not land in his Vostok spacecraft (he ejected from it), but later it recognized that Gagarin was the first human to fly into space. The FAI then established the Yuri A. Gagarin Gold Medal, which has been awarded since 1968. The following types of craft have records: Awards The FAI Gold Air Medal was established in 1924 and was first awarded in 1925. It is reserved for those who have contributed greatly to the development of aeronautics by their activities, work, achievements, initiative or devotion to the cause of Aviation. The FAI has also awarded the Paul Tissandier Diploma since 1952 to those who have served the cause of aviation in general and sporting aviation in particular. The FAI also makes awards for each of the following air sports. FAI Young Artists Contest The FAI Young Artists Contest is an international painting competition for youngsters between the ages of 6 and 17. Each FAI Member Country organises the contest in their country, and the national winners are submitted to the International Jury each year. Members See also References External links
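To make the record-margin rule described above concrete, here is a minimal sketch of the comparison involved. It is illustrative only, not FAI software: the function name is invented for this example, and the required percentage margin varies by record class under the FAI Sporting Code, so it is treated as an input rather than a fixed value.

```c
#include <stdbool.h>
#include <stdio.h>

/* Returns true if a claimed mark beats the standing record by at least the
 * required percentage margin. The margin itself differs by record class in
 * the FAI Sporting Code, so it is passed in as a parameter here. */
static bool beats_record(double standing, double claimed, double required_margin_pct)
{
    double required = standing * (1.0 + required_margin_pct / 100.0);
    return claimed >= required;
}

int main(void)
{
    /* Hypothetical figures: a 1% margin over a 1000 km distance record. */
    double standing = 1000.0, claimed = 1008.0, margin = 1.0;
    printf("claim accepted: %s\n", beats_record(standing, claimed, margin) ? "yes" : "no");
    return 0;
}
```

On these assumed numbers, a 1,000 km record with a 1% margin would only be superseded by a flight of at least 1,010 km, so the 1,008 km claim above would be rejected.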
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#CITEREFEdge_staff1995b] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on its first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, where Race simply said "$299" and walked off to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had launched early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retail account later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because a third company had registered the trademark; the officially distributed Sega Saturn initially took over the market, but after the Sega console was withdrawn, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of around 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for certain letters, read as "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal widened, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub operators such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were removed from later revisions due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can render up to 4,000 sprites and around 180,000 textured polygons per second, or roughly 360,000 polygons per second when flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models removed the parallel port, with the final revisions retaining only a serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available to buy only through an ordering service, and came with the documentation and software needed to program PlayStation games and applications with a C compiler. 
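As a back-of-the-envelope illustration of the figures listed above (1 MB of video RAM and display resolutions from 256×224 to 640×480), the sketch below tallies how much of that budget a single frame buffer would consume at a given resolution and colour depth. This is illustrative arithmetic only, not PlayStation SDK code; the helper name is invented, and the choice of 2 bytes per pixel for 16-bit colour and 3 bytes for 24-bit colour is an assumption made for the example.

```c
#include <stdio.h>

/* Bytes needed for one frame buffer at the given resolution and bytes per pixel. */
static unsigned long framebuffer_bytes(unsigned w, unsigned h, unsigned bytes_per_pixel)
{
    return (unsigned long)w * h * bytes_per_pixel;
}

int main(void)
{
    const unsigned long vram = 1024UL * 1024UL;   /* 1 MB of video RAM */
    struct { unsigned w, h, bpp; } modes[] = {
        { 256, 224, 2 },   /* low resolution, 16-bit colour   */
        { 640, 480, 2 },   /* high resolution, 16-bit colour  */
        { 640, 480, 3 },   /* high resolution, 24-bit colour  */
    };
    for (unsigned i = 0; i < 3; i++) {
        unsigned long bytes = framebuffer_bytes(modes[i].w, modes[i].h, modes[i].bpp);
        printf("%ux%u @ %u bytes/px: %lu bytes (%.0f%% of VRAM)\n",
               modes[i].w, modes[i].h, modes[i].bpp,
               bytes, 100.0 * bytes / vram);
    }
    return 0;
}
```

On these assumed figures, a single 640×480 frame at 16-bit colour already uses roughly 60% of the 1 MB of video RAM, and double-buffering it would exceed the budget entirely, which is consistent with games trading resolution against the space left over for textures.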
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons marked with simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than labelling its buttons with the traditional letters or numbers, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The controller also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons that toggle analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, so any copies it made omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, can experience skipping full-motion video or emit physical "ticking" noises. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the greatest and most influential video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, this being the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling those of Sega and Nintendo. 
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with its video game division coming to contribute roughly 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges, a week compared to two to three months. Further, per-unit production costs were far lower, allowing Sony to offer games at about 40% lower cost to the user compared with ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Awlam] | [TOKENS: 926]
Contents Awlam Awlam (Arabic: عولم) was a Palestinian village 15 kilometres (9.3 mi) south of Tiberias situated on the slopes of the westward Wadi Awlam. In the late Ottoman period it was a ruin, resettled by Algerian migrants under the auspices of the Ottomans. History Awlam is identified as Oulamma, an important town that existed during the Roman era of rule in Palestine. Ceramics from the Byzantine era have been found here. The Crusaders referred to it as Heulem. In 1144 the tithes of the village were given to the bishop of Tiberias. In 1174, the Bishop conceded its tithes to the church of Mount Tabor. Awlam was incorporated into the Ottoman Empire in 1517, and by 1596 it was a village under the administration of the nahiya ("subdistrict") of Tiberias, part of the sanjak of Safad. The village had a population of 12 households and 3 bachelors, an estimated 83 persons, all Muslims. They paid a fixed tax-rate of 25% on wheat, barley, goats, and beehives, in addition to occasional revenues; a total of 3,409 Akçe. A map by Pierre Jacotin from Napoleon's invasion of 1799 showed the place, named as El Awalem. In 1838 it was noted as a village, 'Aulam, in the Tiberias District. In 1859 there were 120 souls in the village, and the cultivation was 14 feddans, according to the British consul Rogers. However, when Victor Guérin visited in 1875, he described the village as “abandoned”. He further noted: “Ancient materials are plentiful there. I noticed in particular a number of column stumps and various fragments of sculptures coming from some building now destroyed. A church, converted later into a mosque, then into a stable, is quite well preserved. It had been built with alternately white and black stones, the former limestone, the latter basalt. On the lintel of the main entrance door one may observe, in the centre, a small circle, which formerly enclosed a cross, today completely effaced. Inside, some column shafts are lying on the ground, with their capitals broken.” In the late 19th century, Awlam was one of several villages settled by Algerian migrants under the auspices of the Ottoman Empire. The settlers in Awlam originated in Ein Bessem, Bouïra. In 1882, it was described as an agricultural village of 120 inhabitants, built of adobe bricks. The Ottomans built an elementary school in this period. A population list from about 1887 showed Aulam to have about 575 inhabitants, all Muslims. By the time Awlam became part of the British Mandate of Palestine, the 'Arab al-Muwaylhat Bedouin tribe had settled in the village. The village had a mosque, but its school was closed down. In the 1922 census of Palestine, Ulam had a population of 496: 487 Muslims, 8 Jews and 1 Christian, the Christian being of the Orthodox faith. The population had increased to 555 in the 1931 census, all Muslims, in a total of 139 houses. The villagers cultivated grain, figs, grapes, and pomegranates. They drew their drinking and domestic water from six different springs. By the 1945 statistics, the village population was 720 Muslims, and the total land area was 18,546 dunams. Of these, 360 dunams were irrigated or used for orchards and 11,139 were used for cereals, while 28 dunams were classified as built-up (urban) land. During the 1948 Arab-Israeli War, Awlam's villagers were ordered to leave on April 6, 1948, by the Arab Higher Committee, which feared they might aid "Zionist forces". However, the Haganah stated that its Golani Brigade entered the village on May 12 and that the inhabitants fled upon its arrival.
Awlam became the final village in the eastern Lower Galilee emptied of its Arab inhabitants. According to Walid Khalidi, "nothing remains of the village buildings except stone rubble; only a spring that was used by the villagers has been left unchanged".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Jerahmeel] | [TOKENS: 329]
Contents Jerahmeel The name Jerahmeel (Hebrew יְרַחְמְאֵל, Yəraḥməʾēl; Greek ιραμεηλ) appears several times in the Tanakh. It means "He will obtain mercy of God", "God pities", "May God have compassion", or "May God pity". Bearers of the name There are probably three distinct persons of that name in the Tanakh. In order of their lifetimes they are: The Jerahmeelites The Jerahmeelites were a people, presumably descended from Jerahmeel number 1 above, living in the Negev, whom David, while in service with the Philistines, claimed to have attacked in 1 Samuel 27:10, but with whom he was really on friendly terms according to 1 Samuel 30:29. Thomas Kelly Cheyne developed a theory that made the Jerahmeelites into a significant part of the history of Israel, but most subsequent scholars have dismissed his ideas as fanciful. An archangel In some deuterocanonical and apocryphal writings, there are references to an archangel variously called Jeremiel, Eremiel, Remiel, etc. See the article Jerahmeel (archangel). Chronicles of Jerahmeel The Chronicles of Jerahmeel is a medieval document ascribed to the 12th-century Jewish historian Jerahmeel ben Solomon. It is unrelated to any of the above.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Public_opinion_on_extraterrestrial_life] | [TOKENS: 216]
Contents Public opinion on extraterrestrial life Public opinion on extraterrestrial life refers to the collective beliefs, attitudes, and statistical data regarding the existence of extraterrestrial life and extraterrestrial intelligence (ETI). While scientific consensus has historically been cautious, public opinion surveys from the late 20th and early 21st centuries indicate a widespread belief in the existence of intelligent alien civilizations, often exceeding the confidence levels expressed by the scientific community. Global surveys Research conducted on an international scale has revealed significant belief in extraterrestrial civilizations across diverse cultures. National surveys (United States) Public opinion in the United States has been frequently polled, revealing a trend of increasing belief in extraterrestrial phenomena. Expert vs. public opinion Recent academic efforts have attempted to quantify the gap between public perception and expert consensus. Demographic variables Belief in extraterrestrial life varies significantly across different demographic groups. Surveys indicate a negative correlation between high religious observance and belief in extraterrestrial intelligence. Cultural interest Public interest in the subject is also measured through digital behavior and media consumption.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-229] | [TOKENS: 13839]
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the centers of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which relates the velocity dispersion of matter in the central bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; the data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to confirm that indeed nothing has escaped, so the definition cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses.
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly. One stellar black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
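The charge–spin inequality quoted above can be checked numerically. The following is a minimal illustrative sketch, not part of the original article, written in Python and assuming SI units and the physical constants from scipy.constants; the mass, charge, and angular momentum values are hypothetical examples.

# Minimal sketch: test the bound Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2
# for a black hole of mass M (kg), charge Q (C), angular momentum J (kg m^2/s).
from scipy.constants import G, c, epsilon_0, pi

def is_subextremal(M, Q=0.0, J=0.0):
    """Return True if (M, Q, J) satisfies the black-hole inequality quoted above."""
    lhs = Q**2 / (4 * pi * epsilon_0) + (c**2 * J**2) / (G * M**2)
    return lhs <= G * M**2

M_sun = 1.989e30  # kg
# Hypothetical case: an uncharged 10-solar-mass hole with half the maximal spin J = G M^2 / c
M = 10 * M_sun
J = 0.5 * G * M**2 / c
print(is_subextremal(M, J=J))  # True: within the bound, so an event horizon exists

A value exceeding the bound would correspond to the horizonless, naked-singularity solutions described above.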
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: identical particles resist being forced into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
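The dimensionless spin magnitude defined above is straightforward to evaluate. As a hedged illustration (the figures reuse the approximate values quoted elsewhere in this article: a mass of about 4.3 million solar masses and a spin near 90% of the maximum for Sagittarius A*; Python and scipy.constants are choices made here, not part of the article):

# Minimal sketch: dimensionless spin a* = c J / (G M^2), which lies between 0 and 1
# for an uncharged black hole; a* = 1 is the extremal (maximal) limit.
from scipy.constants import G, c

M_sun = 1.989e30  # kg

def dimensionless_spin(M, J):
    """Spin parameter a* = cJ/(GM^2) for mass M (kg) and angular momentum J (kg m^2/s)."""
    return c * J / (G * M**2)

M = 4.3e6 * M_sun        # a mass of the order of Sagittarius A*
J_max = G * M**2 / c     # maximal angular momentum allowed for this mass
print(dimensionless_spin(M, 0.9 * J_max))  # ~0.9, i.e. about 90% of the maximum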
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around accreting black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the black hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal, doughnut-like shape. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is $r_{\mathrm{ISCO}} = 3\,r_{\mathrm{s}} = \frac{6GM}{c^{2}}$, where $r_{\mathrm{ISCO}}$ is the radius of the ISCO, $r_{\mathrm{s}}$ is the Schwarzschild radius of the black hole, $G$ is the gravitational constant, and $c$ is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
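As a short numerical illustration of the ISCO expression above (a minimal sketch only; the 10-solar-mass figure, Python, and scipy.constants are assumptions made here, not taken from the article):

# Minimal sketch: ISCO of a Schwarzschild (non-spinning, uncharged) black hole,
# r_ISCO = 6 G M / c^2 = 3 r_s, for a mass M given in kilograms.
from scipy.constants import G, c

M_sun = 1.989e30  # kg

def isco_radius_schwarzschild(M):
    """ISCO radius in metres for a non-spinning, uncharged black hole of mass M (kg)."""
    return 6 * G * M / c**2

r = isco_radius_schwarzschild(10 * M_sun)
print(f"{r / 1e3:.1f} km")  # roughly 89 km for a hypothetical 10-solar-mass black hole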
For example, the ISCO for a particle orbiting retrograde can be as far out as about $9\,r_{\mathrm{s}}$, while the ISCO for a particle orbiting prograde can be as close as at the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1–3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates similar to a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
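The Schwarzschild photon-sphere radius stated above (1.5 times the Schwarzschild radius, i.e. 3GM/c²) can likewise be evaluated numerically. A minimal sketch under assumed inputs (the Sagittarius A* mass of about 4.3 million solar masses is reused from earlier in the article; Python and scipy.constants are choices made here):

# Minimal sketch: photon sphere radius of a non-spinning, uncharged black hole,
# r_ph = 1.5 * r_s = 3 G M / c^2, with M in kilograms.
from scipy.constants import G, c

M_sun = 1.989e30  # kg

def photon_sphere_radius(M):
    """Photon sphere radius in metres for a Schwarzschild black hole of mass M (kg)."""
    return 3 * G * M / c**2

r = photon_sphere_radius(4.3e6 * M_sun)
print(f"{r / 1e9:.1f} million km")  # about 19 million km for a Sgr A*-like mass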
In this area it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through $r_{\mathrm{s}} = \frac{2GM}{c^{2}} \approx 2.95\,\frac{M}{M_{\odot}}~\mathrm{km}$, where $r_{\mathrm{s}}$ is the Schwarzschild radius and $M_{\odot}$ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to $r_{+} = \frac{GM}{c^{2}}$, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided up into two segments: an ingoing section and an outgoing section.
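The Schwarzschild radius formula and the inverse-square density scaling described above can be checked with a few lines of arithmetic. A minimal sketch, assuming Python and scipy.constants (the one-solar-mass and 10⁸-solar-mass examples mirror figures already given in the text):

# Minimal sketch: Schwarzschild radius r_s = 2GM/c^2 and the average density
# within it, rho = M / ((4/3) * pi * r_s^3), which falls as 1/M^2.
from math import pi
from scipy.constants import G, c

M_sun = 1.989e30  # kg

def schwarzschild_radius(M):
    """Schwarzschild radius in metres for mass M in kilograms."""
    return 2 * G * M / c**2

def mean_density(M):
    """Average density (kg/m^3) enclosed within the Schwarzschild radius."""
    r = schwarzschild_radius(M)
    return M / ((4 / 3) * pi * r**3)

print(schwarzschild_radius(M_sun) / 1e3)  # ~2.95 km for one solar mass
print(mean_density(1e8 * M_sun))          # of order 10^3 kg/m^3, comparable to water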
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off of the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole is large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or of a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or formation from hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse and will start fusing more and more massive elements, until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift $z \sim 7$, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but the curvature of spacetime in such regions could eventually have become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation by other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in such a region could become large enough to cause it to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.

Evolution
Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little matter for the black holes to interact with gravitationally that would carry away orbital energy and allow them to draw closer. This has been called the final parsec problem, as the distance at which the stalling occurs is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the innermost stable circular orbit (ISCO), between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure becomes as strong as the inward gravitational force, and the black hole should then be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
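The Eddington limit follows from balancing outward radiation pressure on electrons against gravity, L_Edd = 4πGMm_pc/σ_T; the short sketch below evaluates this and the corresponding accretion rate for an assumed 10% radiative efficiency, with the example masses chosen purely for illustration.

# Eddington luminosity and the corresponding Eddington accretion rate.
#   L_Edd = 4 * pi * G * M * m_p * c / sigma_T
#   Mdot_Edd = L_Edd / (eps * c**2), with an assumed efficiency eps = 0.1
import math

G       = 6.674e-11    # m^3 kg^-1 s^-2
c       = 2.998e8      # m/s
m_p     = 1.673e-27    # proton mass, kg
sigma_T = 6.652e-29    # Thomson cross-section, m^2
M_SUN   = 1.989e30     # kg
YEAR    = 3.156e7      # s

eps = 0.1  # assumed radiative efficiency

def eddington_luminosity(mass_kg):
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

for label, mass in [("1 solar mass", M_SUN), ("1e8 solar masses", 1e8 * M_SUN)]:
    L_edd = eddington_luminosity(mass)
    mdot = L_edd / (eps * c**2)   # kg/s
    print(f"{label}: L_Edd = {L_edd:.2e} W, "
          f"Mdot_Edd = {mdot * YEAR / M_SUN:.2e} solar masses per year")

For a 10⁸ M☉ black hole this works out to roughly a couple of solar masses per year, which is why sustained super-Eddington episodes are often invoked for the fastest-growing quasars.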
Accretion beyond the limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to be torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from the galactic core, causing gas in galactic centers to be hotter than expected.

If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of the order of a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern analyses concluding that such black holes can make up no more than a fraction of about 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes but has not yet found any.
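The Hawking temperature quoted above can be checked against the standard formula T_H = ħc³/(8πGMk_B); the sketch below also inverts the formula to find the mass at which T_H matches the 2.7 K microwave background, the threshold for net evaporation mentioned above (the constants are standard values; the Moon comparison is for scale only).

# Hawking temperature of a Schwarzschild black hole:
#   T_H = hbar * c**3 / (8 * pi * G * M * k_B)
# and the mass at which T_H equals the 2.7 K cosmic microwave background.
import math

hbar   = 1.055e-34   # J s
c      = 2.998e8     # m/s
G      = 6.674e-11   # m^3 kg^-1 s^-2
k_B    = 1.381e-23   # J/K
M_SUN  = 1.989e30    # kg
M_MOON = 7.35e22     # kg, for comparison only

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"T_H for 1 solar mass: {hawking_temperature(M_SUN) * 1e9:.0f} nK")

# Invert the formula: the mass whose Hawking temperature equals the CMB's 2.7 K
m_crit = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
print(f"Mass with T_H = 2.7 K: {m_crit:.2e} kg "
      f"({m_crit / M_MOON:.2f} Moon masses)")

With these constants the 1 M☉ temperature comes out at about 62 nK and the evaporation threshold at a bit over half the Moon's mass, consistent with the figures quoted above.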
The properties of a black hole are constrained and interrelated by the theories that predict them. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. They are not equivalent, however, because according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have an entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many candidate theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.

Observational evidence
Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global network of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope is set by its aperture and the wavelength at which it observes. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons at radio wavelengths. By combining data from several radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.
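The aperture argument can be made quantitative with the diffraction limit θ ≈ 1.22 λ/D; the sketch below takes the EHT's 1.3 mm observing wavelength and the Earth's diameter as the effective baseline, and compares the result with a Sagittarius A* shadow diameter of roughly 50 microarcseconds (an approximate, assumed figure).

# Diffraction-limited angular resolution, theta ~ 1.22 * wavelength / aperture,
# for an Earth-sized radio interferometer observing at 1.3 mm.
import math

wavelength = 1.3e-3          # m, EHT observing wavelength
earth_diameter = 1.2742e7    # m, effective aperture of the array
MICROARCSEC_PER_RAD = 180 / math.pi * 3600e6

theta = 1.22 * wavelength / earth_diameter     # radians
theta_uas = theta * MICROARCSEC_PER_RAD        # microarcseconds

shadow_sgr_a_star = 50.0   # microarcsec, approximate (assumed) shadow diameter

print(f"Resolution of an Earth-sized aperture at 1.3 mm: {theta_uas:.0f} microarcsec")
print(f"Approximate Sagittarius A* shadow diameter:      {shadow_sgr_a_star:.0f} microarcsec")

Under these assumptions the synthesized beam is a few times finer than the shadow itself; a single dish of any realistic size would fall short by orders of magnitude.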
Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long perpendicular arms. The beams reflect off mirrors at the ends of the arms and recombine at the intersection, where they normally cancel each other out. When a gravitational wave passes, however, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel and produce a recognizable signal. Analysis of the signal gives scientists information about what caused the gravitational waves. Because gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise in order to detect them. Since the first direct detection in 2015, multiple gravitational-wave signals from black holes have been detected and analyzed.

The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a mass of 2.6×10⁶ M☉ must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers refined the mass of Sagittarius A* to 4.3×10⁶ M☉, contained within a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the measurement alone does not prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass in such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.

X-ray binaries are binary systems that emit most of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) sets the largest mass a nonrotating neutron star can have, estimated at about two solar masses. While a rotating neutron star can be slightly more massive, a compact object much more massive than the TOV limit cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes.
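Returning to the orbit-fitting method used for Sagittarius A*: the enclosed mass follows from Kepler's third law, M = 4π²a³/(GP²); the sketch below plugs in rounded, assumed values for S2's period and semi-major axis (about 16 years and about 1,000 AU), so the result is only a consistency check rather than the published measurement.

# Enclosed mass from a Keplerian orbit (Kepler's third law):
#   M = 4 * pi**2 * a**3 / (G * P**2)
# using approximate values for the star S2's orbit around Sagittarius A*.
import math

G     = 6.674e-11    # m^3 kg^-1 s^-2
AU    = 1.496e11     # m
YEAR  = 3.156e7      # s
M_SUN = 1.989e30     # kg

period = 16.0 * YEAR      # orbital period of S2 (assumed ~16 yr)
a      = 1000.0 * AU      # semi-major axis of S2's orbit (assumed ~1000 AU)

mass = 4 * math.pi**2 * a**3 / (G * period**2)
print(f"Enclosed mass: {mass / M_SUN:.1e} solar masses")

With these rounded orbital elements the estimate lands near 4×10⁶ M☉, consistent with the refined value quoted above; the published analyses fit the full astrometric and radial-velocity data rather than this two-parameter shortcut.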
The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galactic nuclei have been studied carefully enough both to identify and to measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself.

Another way black holes can be detected is through observation of the effects of their strong gravitational fields. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this regime is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves toward the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉.
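Why such an event shows up as a brightening rather than as resolvable multiple images can be seen from the angular Einstein radius of a point lens; the sketch below evaluates it for an assumed 7 M☉ lens at 1.6 kpc in front of a source at 8 kpc, numbers chosen loosely to resemble the 2022 detection rather than taken from it.

# Angular Einstein radius of a point lens:
#   theta_E = sqrt( (4 * G * M / c**2) * (D_S - D_L) / (D_L * D_S) )
# Assumed values: 7 solar-mass lens at 1.6 kpc, background source at 8 kpc.
import math

G     = 6.674e-11     # m^3 kg^-1 s^-2
c     = 2.998e8       # m/s
M_SUN = 1.989e30      # kg
PC    = 3.086e16      # m
MILLIARCSEC_PER_RAD = 180 / math.pi * 3600e3

M   = 7 * M_SUN       # lens mass (assumed)
D_L = 1.6e3 * PC      # lens distance (assumed)
D_S = 8.0e3 * PC      # source distance (assumed)

theta_E = math.sqrt((4 * G * M / c**2) * (D_S - D_L) / (D_L * D_S))
print(f"Einstein radius: {theta_E * MILLIARCSEC_PER_RAD:.1f} milliarcseconds")

The image separation is of order the Einstein radius, a few milliarcseconds here, well below what survey telescopes resolve, so only the combined brightening and a small astrometric shift are observable.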
Alternatives
While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood, and new exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure; this would halt gravitational collapse at a higher mass than for a neutron star. Hypothetical electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure. While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus also offer an alternative to supermassive black holes.

A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but via a different mechanism. A dark-energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than that of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be collapsing gravitationally slowly enough that quantum effects keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even contain another gravastar, a configuration called a 'nestar'.

Open questions
According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation, and this radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that the information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.

Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe at redshifts as high as z ≥ 7. These black holes had been assumed to be the products of the gravitational collapse of large Population III stars. However, such stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of mechanisms by which these supermassive black holes may have formed. Smaller black holes may have undergone mergers to produce the observed supermassive black holes. They may also have been seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, owing to low angular momentum or heating from a nearby galaxy; given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, the supermassive black holes of the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, for example when dense gas in the accretion disk suppresses the outward radiation pressure that would otherwise limit accretion. However, the formation of bipolar jets can prevent super-Eddington rates.

In fiction
Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with the characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space, with its "black Sun", and the 1935 short story Starship Invincible, with its "hole in space". As black holes grew in public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet near a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other means of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-294] | [TOKENS: 17273]
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered as the most attractive to the population are the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environmental-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The military's total strength is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the National Guard of the United States as a federal reserve component and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at levels from local to national. Law in the United States is mainly enforced by local police departments and sheriff departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for PPP, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasury securities market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. is party to free trade agreements with several countries, including the USMCA with Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and had the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. From 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. is among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman; in 2019, at 23%, the country had the world's highest rate of children living in single-parent households. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto national language of the United States, and in 2025 Executive Order 14224 declared English the official language. However, the U.S. has never had an official language established by statute, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, then-President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government, such as the U.S. service academies, the Naval Postgraduate School, and military staff colleges, do not charge tuition and limit enrollment to military personnel and government employees. Although some student loan forgiveness programs are in place, student loan debt increased by 102% between 2010 and 2020, and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies, including the National Endowment for the Arts and the National Endowment for the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. In the early to mid-19th century, the writer and critic John Neal helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers such as Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. American theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful and best-attended films in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs have studied before going on to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-536] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, Tesla approved a pay package for Musk worth $1 trillion, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided instead to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B. However, according to numerous former business associates and shareholders, Musk said at the time that he was on a student visa. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture in a small rented office in Palo Alto. In an interview with Rolling Stone, Musk rejected the notion that they had started the company with funds borrowed from Errol Musk, but in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from the Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, SpaceX successfully landed the first stage of a Falcon 9 on a land platform in 2015. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, enabling Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-produced all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several commercially successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), becoming the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. 
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink.[needs update] In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter, having questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid at approximately $44 billion, which included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase of the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk scaled back content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk promised to step down as CEO after a Twitter poll, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or by suspending their accounts without justification. 
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, most of whom avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election for Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign. 
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement in a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it had been a mistake on the Democrats' part not to invite Musk to a White House electric vehicle event held in August 2021, which featured executives from General Motors, Ford, and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the snub had been a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation had affected Musk's perspective. Fortune noted that, at the time, Musk had said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized the Biden White House as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that its leaders chose Trump because of common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud, and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, a gathering dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found that, since 2023, he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents. 
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided along partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when the special government employee's 130-day deadline expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025. 
After leaving the administration, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump; most notably, on June 5, 2025, Musk posted on X (formerly Twitter) that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public", alleging that Trump had ties to the sex offender Jeffrey Epstein. Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for Trump to be impeached a third time. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for his posts about Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and he regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While he describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024. 
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently accused of spreading misinformation and amplifying the far right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the details of the previous agreement, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law in connection with his purchase of Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for alleged securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at the TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated that he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine, and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that, if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concern from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. 
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the ongoing surrogate pregnancy, Musk confirmed reports in September 2021 that the couple were "semi-separated"; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk had allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to the couple's divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth was derived from Tesla stock, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk worth potentially $1 trillion was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their respective industries since the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, in contrast to other billionaires who prefer reclusiveness in order to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn denunciation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters such as British survivors of grooming gangs. 
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then-Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Al-Mujaydil] | [TOKENS: 1425]
Contents Al-Mujaydil Al-Mujaydil (Arabic: المْجيدل; also: al-Mujeidil) was an Arab-Palestinian village located 6 km southwest of Nazareth. Al-Mujaydil was one of the few towns granted local council status by the Mandatory Palestine government. In 1945, the village had a population of 1,900 and a total land area of 18,836 dunams, mostly Arab-owned. The population was partly Christian, and the town contained a Roman Catholic church and monastery. After its depopulation in 1948, the village was destroyed and later built over by Migdal HaEmek. History Traces of a Roman road were found close to the village, which may indicate that the region was opened to intensive settlement as early as Roman times. In the 1596 tax records, Al-Mujaydil was part of the Ottoman Empire's nahiyah (subdistrict) of Tabariyya under the Sanjak of Safad, with a population of 4 Muslim families. The villagers paid a fixed tax rate of 25% on various agricultural products, including wheat and barley, fruit trees, as well as on goats and beehives; a total of 3,295 akçe. Half of the revenue went to a Waqf. In 1799 it was named Magidel on the map of Pierre Jacotin. C.R. Conder, of the PEF's Survey of Western Palestine, camped by the place in the 1870s and described the village as a place being visited by missionaries. The village was also described as "flourishing", and built of stone and mud. It was on the northern side of a small plateau, and olive groves were cultivated to the south and to the east. The population size was estimated at 800 (in 1859), and the villagers cultivated 100 faddans. In 1882, Grand Duke Sergei Alexandrovich of Russia, the brother of the Russian Tsar, visited the village and donated money for the construction of a Russian Orthodox church there in the hope that local Christians would convert to the Orthodox faith. However, the Patriarch of Jerusalem, Nikodim, opened the church to all denominations in the village and ensured it functioned most of the time as a village school. A population list from about 1887 showed that el Mujeidel had about 1,000 inhabitants, "for the greater part Muslims". In 1903, a Roman Catholic church was built in the village. It housed on its first floor a trilingual school for boys and girls (teaching was in Arabic, Italian and French). It also housed a local clinic for the benefit of the villagers. According to the British Mandate's 1922 census of Palestine, Mujaidel had 1,009 inhabitants: 817 Muslims and 192 Christians, of whom 150 were Orthodox, 33 Roman Catholic, 2 Melkite and 7 Anglican. In 1930, the al-Huda mosque was built in the village; it was 12 meters high and 8 meters wide. A kuttab was nearby. The mosque was famous for the elaborate system it used to collect rainfall from its roof into a well. A tall minaret was added in the 1940s. By the 1931 census the population had increased to 1,241, comprising 1,044 Muslims and 197 Christians, in a total of 293 houses. In the 1945 statistics the population of Mujeidil was 1,900, comprising 1,640 Muslims and 260 Christians, with a total of 18,836 dunams of land, according to an official land and population survey. Of this, 1,719 dunams were used for plantations and irrigable land, 15,474 for cereals, while 34 dunams were built-up land. Al-Mujaydil was occupied and captured by the Haganah's Golani Brigade during the second half of Operation Dekel, on 15 July 1948. The attack included a bombing raid by Israeli planes. Most of the population fled to the nearby city of Nazareth, where they live as internal refugees. 
In August 1948, a Jezreel Battalion Golani patrol encountered "groups of Arab women working fields" near Al-Mujaydil, and reported: "I [squad OC Shalom Lipman] ordered the machine-gun to fire three bursts over their heads, to drive them off. They fled in the direction of the olive grove...". But after the patrol left, the villagers returned. The patrol came back and encountered "a group of Arab men and women... I opened fire and killed a Palestinian man and one man and one woman were injured. In the two incidents, I expended 31 bullets." The following day, 6 August, the same patrol encountered two Arab funeral processions. The commander remarked dryly that "one can only assume that one of yesterday's wounded died." A day or two later, the patrol again encountered "a large group of Arab women in the fields of Mujeidil. When we approached them to drive them off, an Arab male [was found] hiding near them, [and] he was executed by us. The women were warned not to return to this area of Mujeidil." The company commander commented: "Arab women repeatedly attempt to return to Mujeidil, and they are usually accompanied by men. I gave firm orders to stymie every attempt [lehasel kol nisayon] to return to the village of Mujeidil." However, in 1950, after intervention from Pope Pius XII, the Palestinian Christians of the village were offered the opportunity to move back to the village, but refused to do so without their Muslim neighbours. Israel then destroyed half of the houses and one of the village mosques. The Israeli town of Migdal HaEmek was founded by Iranian Jews in 1952 on the destroyed Palestinian village's land, less than 1 km southwest of the village site. Yifat, established in 1926 on what was traditionally village land, is 2 km to the west of the site of Al-Mujaydil. The Palestinian historian Walid Khalidi described the remains of the village in 1992: "Most of the site is covered with a pine forest that serves as an Israeli park. The monastery and parts of the (destroyed) church are the only remaining buildings on the site; monks still live in the monastery. Remnants of destroyed houses and the walls of a cemetery are visible. Cactuses and pomegranate, olive, and fig trees grow around the site, which is dotted with wells." Notable descendants: Diana Buttu.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Comparison_of_open-source_and_closed-source_software] | [TOKENS: 2031]
Contents Comparison of open-source and closed-source software Free/open-source software – the source availability model used by free and open-source software (FOSS) – and closed source are two approaches to the distribution of software. Background Under the closed-source model, source code is not released to the public. Closed-source software is maintained by a team that produces its product in a compiled, executable state, which is what the market is allowed access to. Microsoft, the owner and developer of Windows and Microsoft Office, along with other major software companies, has long been a proponent of this business model, although in August 2010, Microsoft interoperability general manager Jean Paoli said Microsoft "loves open source" and that its anti-open-source position had been a mistake. The FOSS model allows capable users to view and modify a product's source code, but most of such code is not in the public domain. Common advantages cited by proponents for having such a structure are expressed in terms of trust, acceptance, teamwork and quality. A non-free license is used to limit what free software movement advocates consider to be the essential freedoms. A license, whether it provides open-source code or not, that does not stipulate the "four software freedoms" is not considered "free" by the free software movement. A closed-source license is one that limits only the availability of the source code. By contrast, a copyleft license claims to protect the "four software freedoms" by explicitly granting them and then explicitly prohibiting anyone from redistributing the package or reusing the code in it to make derivative works without including the same licensing clauses. Some licenses grant the four software freedoms but allow redistributors to remove them if they wish. Such licenses are sometimes called permissive software licenses. An example of such a license is the FreeBSD License, which allows derivative software to be distributed as non-free or closed source, as long as it gives credit to the original designers. A misconception that is often made by both proponents and detractors of FOSS is that it cannot be commercialized. FOSS can be and has been commercialized by companies such as Red Hat, Canonical, Mozilla, Google, IBM, Novell, Sun/Oracle, VMware and others. Commercialization The primary business model for closed-source software involves the use of constraints on what can be done with the software and the restriction of access to the original source code. This can result in a form of imposed artificial scarcity for a product that is otherwise very easy to copy and redistribute. The result is that an end-user is not actually purchasing software, but purchasing the right to use the software. To this end, the source code of closed-source software is considered a trade secret by its manufacturers. FOSS methods, on the other hand, typically do not limit the use of software in this fashion. Instead, the revenue model is based mainly on support services. Red Hat Inc. and Canonical Ltd. are two such companies, which give their software away freely but charge for support services. The source code of the software is usually given away, and pre-compiled binary software frequently accompanies it for convenience. As a result, the source code can be freely modified. However, there can be some license-based restrictions on redistributing the software. Generally, software can be modified and redistributed for free, as long as credit is given to the original manufacturer of the software. 
In addition, FOSS can generally be sold commercially, as long as the source code is provided. There are a wide variety of free software licenses that define how a program can be used, modified, and sold commercially (see GPL, LGPL, and BSD-type licenses). FOSS may also be funded through donations. A software philosophy that combines aspects of FOSS and proprietary software is open core software, or commercial open source software. Despite having received criticism from some proponents of FOSS, open core software has exhibited marginal success. Examples of open core software include MySQL and VirtualBox. The MINIX operating system used to follow this business model, but came under the full terms of the BSD license after the year 2000. The FOSS model has proved somewhat successful, as witnessed in the Linux community. There are numerous Linux distributions available, but a great many of them are simply modified versions of some previous version. For example, Fedora Linux, Mandriva Linux, and PCLinuxOS are all derivatives of an earlier product, Red Hat Linux. In fact, Red Hat Enterprise Linux is itself a derivative of Fedora Linux. This is an example of one vendor creating a product, allowing a third party to modify the software, and a tertiary product then being created based on the modified version. All of the products listed above are currently produced by software service companies. Operating systems built on the Linux kernel are available for a wider range of processor architectures than Microsoft Windows, including PowerPC and SPARC. None of these can match the sheer popularity of the x86 architecture; nevertheless, they do have significant numbers of users. Windows remains unavailable for these alternative architectures, although there have been such ports of it in the past. The most obvious complaint against FOSS revolves around the fact that making money through some traditional methods, such as the sale of the use of individual copies and patent royalty payments, is much more difficult and sometimes impractical with FOSS. Moreover, FOSS has been considered damaging to the commercial software market, as evidenced in documents released as part of the Microsoft Halloween documents leak. The cost of making a copy of a software program is essentially zero, so per-use fees are perhaps unreasonable for open-source software. At one time, open-source software development was almost entirely volunteer-driven, and although this is true for many small projects, many alternative funding streams have been identified and employed for FOSS: Increasingly, FOSS is developed by commercial organizations. In 2004, Andrew Morton noted that 37,000 of the 38,000 recent patches in the Linux kernel had been created by developers directly paid to develop the Linux kernel. Many projects, such as the X Window System and Apache, have had commercial development as a primary source of improvements since their inception. This trend has accelerated over time.[citation needed] There are some[who?] who counter that the commercialization of FOSS is a poorly devised business model because commercial FOSS companies answer to parties with opposing agendas. On the one hand, commercial FOSS companies answer to volunteer developers, who are difficult to keep on a schedule, and on the other hand they answer to shareholders, who expect a return on their investment. FOSS development is often not on a fixed schedule, which may have an adverse effect on a commercial FOSS company's ability to release software on time. 
Innovation Gary Hamel counters the claim that FOSS is less innovative by saying that quantifying who or what is innovative is impossible. The implementation of compatible FOSS replacements for proprietary software is encouraged by the Free Software Foundation to make it possible for their users to use FOSS instead of proprietary software; for example, it has listed GNU Octave, an API-compatible replacement for MATLAB, as one of its high-priority projects. In the past this list contained free binary-compatible Java and CLI implementations, like GNU Classpath and DotGNU. Thus, even "derivative" developments are important in the opinion of many people in the FOSS community. However, there is no quantitative analysis of whether FOSS is less innovative than proprietary software, since derivative and re-implementing developments exist on the proprietary side as well. Some of the largest well-known FOSS projects are either legacy code (e.g., FreeBSD or Apache) developed a long time ago independently of the free software movement, or were started by companies like Netscape (which open-sourced its code in the hope that it could compete better), or by companies like MySQL, which use FOSS to lure customers for their more expensive licensed products. However, it is notable that most of these projects have seen major or even complete rewrites (in the case of the Mozilla and Apache 2 code, for example) and do not contain much of the original code. Innovations have come, and continue to come, from the open-source world: In 2008, the Department of Management Science and Technology at the Athens University of Economics and Business published an analysis of the FreeBSD, Linux, Solaris, and Windows operating system kernels which looked for differences between code developed using open-source and proprietary processes. The study collected metrics in the areas of file organization, code structure, code style, the use of the C preprocessor, and data organization. The aggregate results indicated that the kernels scored comparably to one another. Another study, conducted by Synopsys and published in 2014, found open-source code to be of better quality. A study done on seventeen open-source and closed-source software products showed that the number of vulnerabilities existing in a piece of software is not affected by the source availability model that it uses. The study used a very simple metric: comparing the number of vulnerabilities between the open-source and closed-source software. Another study was done by a group of professors at Northern Kentucky University on fourteen open-source web applications written in PHP. The study measured the vulnerability density – the number of known vulnerabilities relative to the size of the code – in the web applications and showed that some of them had increased vulnerability density, but some of them also had decreased vulnerability density (a short illustrative sketch of this metric appears at the end of this article). Business models In its 2008 Annual Report, Microsoft stated that FOSS business models challenge its license-based software model and that the firms that use these business models do not bear the cost of their software development.[clarification needed] The company also stated in the report: Some of these [open source software] firms may build upon Microsoft ideas that we provide to them free or at low royalties in connection with our interoperability initiatives. To the extent open source software gains increasing market acceptance, our sales, revenue and operating margins may decline. 
Open source software vendors are devoting considerable efforts to developing software that mimics the features and functionality of our products, in some cases on the basis of technical specifications for Microsoft technologies that we make available. In response to competition, we are developing versions of our products with basic functionality that are sold at lower prices than the standard versions. There are numerous business models for open-source companies, which can be found in the literature.
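The vulnerability-density metric used in the web-application study mentioned earlier is simply the number of reported vulnerabilities divided by code size (commonly per thousand lines of code). The following sketch illustrates the calculation; the project names and figures are invented for demonstration and are not data from the studies cited.

```python
# Hypothetical illustration of a vulnerability-density metric:
# reported vulnerabilities per thousand lines of code (KLOC).
# The project names and figures below are invented for demonstration
# and do not come from the studies discussed above.

def vulnerability_density(vulns: int, lines_of_code: int) -> float:
    """Return vulnerabilities per 1,000 lines of code."""
    return vulns / (lines_of_code / 1000)

projects = {
    # name: (reported vulnerabilities, lines of code, source model)
    "webapp-open-a":   (12, 150_000, "open source"),
    "webapp-open-b":   (3,  40_000,  "open source"),
    "webapp-closed-a": (9,  120_000, "closed source"),
}

for name, (vulns, loc, model) in projects.items():
    density = vulnerability_density(vulns, loc)
    print(f"{name:16s} ({model:13s}): {density:.2f} vulns/KLOC")
```

As the studies above suggest, the availability of the source code by itself does not determine where such a figure will land for any given project.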
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-105] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said "$299" and left the audience with a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games to consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (as the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released officially because a third company had registered the trademark, so the officially distributed Sega Saturn initially dominated the market; as the Sega console withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal enlarged, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, encouraged by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, PlayStation became the first console to ship 100 million units with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is a R3000 CPU made by LSI Logic operating at a clock rate of 33.8688 MHz and 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusual for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in number of parallel ports, with the final version only retaining one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service and with the necessary documentation and software to program PlayStation games and applications through C programming compilers. 
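As a rough illustration of how the display modes quoted above relate to the console's 1 MB of video RAM, a 16-bit frame buffer at each resolution can be sized directly. The sketch below assumes a simple double-buffered 16-bit layout, which is a simplification of how real PlayStation software actually managed its VRAM (textures, for instance, share the same memory).

```python
# Back-of-the-envelope check of 16-bit frame-buffer sizes against the
# PlayStation's 1 MB of video RAM. Simplified model: ignores texture
# storage and the console's actual VRAM layout.

VRAM_BYTES = 1 * 1024 * 1024          # 1 MB of video RAM
BYTES_PER_PIXEL = 2                   # 16-bit colour

# The two extremes quoted in the text, plus a common intermediate mode.
resolutions = [(256, 224), (320, 240), (640, 480)]

for width, height in resolutions:
    single = width * height * BYTES_PER_PIXEL
    double = 2 * single               # two buffers for flicker-free drawing
    verdict = "fits" if double <= VRAM_BYTES else "exceeds"
    print(f"{width}x{height}: one buffer {single / 1024:.0f} KiB, "
          f"double-buffered {double / 1024:.0f} KiB ({verdict} 1 MB VRAM)")
```

The arithmetic shows why the lower resolutions were the common case for games: a double-buffered 16-bit 640×480 display alone would exceed the available video memory.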
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and a pink square (, , , ). Rather than depicting traditionally used letters or numbers onto its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controllers are roughly 10% larger than its Japanese variant, to account for the fact the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could have left games vulnerable to piracy, due to the growing popularity of CD-R and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberate irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (therefore duplicating the discs omitting it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently in a well vented area or raise the unit up slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), Metal Gear Solid (1998), all of which became established franchises. 
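The disc-authentication scheme described above can be thought of as a boot-time gatekeeping routine: decode the wobble signature from the pregap, and refuse to boot if it is missing (a burned copy) or belongs to another region. The sketch below is purely illustrative; the function names, data structures, and the exact region strings are assumptions made for this example and do not reflect the actual PlayStation firmware.

```python
# Illustrative model of the PlayStation's boot-time disc check. A genuine
# pressed disc carries a low-frequency "wobble" in its pregap that ordinary
# CD burners cannot reproduce; the console decodes it and compares it with
# its own region. Names, return values, and region strings are assumptions
# for this sketch only.

CONSOLE_REGION = "SCEA"  # hypothetical: a North American console

def read_pregap_wobble(disc) -> str | None:
    """Stand-in for decoding the wobble signal from the disc's pregap.

    Returns a region string for a genuine pressed disc, or None for a
    burned copy, whose writer compensated the wobble away.
    """
    return disc.get("wobble")

def can_boot(disc) -> bool:
    region = read_pregap_wobble(disc)
    if region is None:
        return False                 # no wobble signature: copied disc
    return region == CONSOLE_REGION  # region lockout uses the same signal

pressed_disc = {"title": "Ridge Racer", "wobble": "SCEA"}
burned_copy  = {"title": "Ridge Racer", "wobble": None}

print(can_boot(pressed_disc))  # True
print(can_boot(burned_copy))   # False
```

The key point the model captures is that a conventional drive reads the disc's data perfectly but loses the wobble when copying, so the copy fails the check even though its contents are identical.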
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling that of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success resulted in a significant financial boon for Sony, with profits from its video game division contributing 23% of the company's total. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of its key factors in gaining mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to deliver a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the Nintendo 64, which relied on proprietary cartridges[d] and which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
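The pricing claim above (games roughly 40% cheaper on CD while yielding the same net revenue per unit) can be made concrete with a toy calculation. The per-unit costs and the retailer margin below are invented purely for illustration; they are not figures from the article.

```python
# Toy illustration of the CD-versus-cartridge economics described above.
# All figures are hypothetical, chosen only to show how a ~40% lower retail
# price can coexist with the same net revenue per unit when the medium is
# much cheaper to manufacture.

def net_revenue(retail_price: float, manufacturing_cost: float,
                retailer_margin: float = 0.30) -> float:
    """Publisher's take per unit after the retailer's cut and media cost."""
    return retail_price * (1 - retailer_margin) - manufacturing_cost

cartridge = net_revenue(retail_price=69.99, manufacturing_cost=25.00)
cd_rom    = net_revenue(retail_price=41.99, manufacturing_cost=5.40)

print(f"Cartridge game: ${cartridge:.2f} net per unit")   # ~$23.99
print(f"CD-ROM game:    ${cd_rom:.2f} net per unit")      # ~$23.99
print(f"Retail price difference: {(1 - 41.99 / 69.99) * 100:.0f}% lower on CD")
```

With these hypothetical inputs the publisher nets about the same amount per unit from both media, even though the CD title retails 40% cheaper, which is the relationship the passage describes.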
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bill_of_lading] | [TOKENS: 2693]
Contents Bill of lading A bill of lading (/ˈleɪdɪŋ/) (sometimes abbreviated as B/L or BOL) is a document issued by a carrier (or their agent) to acknowledge receipt of cargo for shipment. Although the term is historically related only to carriage by sea, a bill of lading may today be used for any type of carriage of goods. Bills of lading are one of three crucial documents used in international trade to ensure that exporters receive payment and importers receive the merchandise. The other two documents are a policy of insurance and an invoice.[a] Whereas a bill of lading is negotiable, both a policy and an invoice are assignable. In international trade outside the United States, bills of lading are distinct from waybills in that the latter are not transferable and do not confer title. Nevertheless, the UK Carriage of Goods by Sea Act 1992 grants "all rights of suit under the contract of carriage" to the lawful holder of a bill of lading, or to the consignee under a sea waybill or a ship's delivery order. A bill of lading must be transferable,[b] and serves three main functions: Typical export transactions use Incoterms terms such as CIF, FOB or FAS, requiring the exporter/shipper to deliver the goods to the ship, whether onboard or alongside. Nevertheless, the loading itself will usually be done by the carrier or by a third party stevedore. Description A bill of lading is a standard-form document which is transferable by endorsement (or by lawful transfer of possession). Most shipments by sea are covered by the Hague Rules, the Hague-Visby Rules or the Hamburg Rules, which require the carrier to issue the shipper a bill of lading identifying the nature, quantity, quality and leading marks (identification marks and numbers) of the goods. In the United Kingdom, in the case of Coventry v Gladstone (1867), Lord Justice Blackburn defined a bill of lading as "A writing signed on behalf of the owner of ship in which goods are embarked, acknowledging the receipt of the Goods, and undertaking to deliver them at the end of the voyage, subject to such conditions as may be mentioned in the bill of lading." Therefore, it can be stated that the bill of lading was introduced to provide a receipt to the shipper in the absence of the owners. In Glyn Mills & Co. v. East and West India Dock Co (1882), which concerned the presentation of a series of bills of lading, the decision also encompassed any document which "by mercantile law and usage ... is a symbol of the right of property in the goods". Although the term "bill of lading" is well known and well understood, the proposed Rotterdam Rules use the term "transport documents", of which bills of lading and seaway bills are examples. History While there is evidence of the existence of receipts for goods loaded aboard merchant vessels stretching back as far as Roman times, the practice of recording cargo aboard ship in the ship's log is almost as long-lived as shipping itself. The modern bill of lading only came into use with the growth of international trade in the medieval world. The growth of mercantilism (which produced other financial innovations such as the charterparty or carta partita, the bill of exchange and the insurance policy) produced a requirement for a title document that could be traded in much the same way as the goods themselves. It was this new avenue of trade that produced the bill of lading in much the same format as currently used. The word "lading" means "loading", both words being derived from the Old English word hladan. 
"Lading" specifically refers to the loading of cargo aboard a ship. The Dutch word "lading" has exactly the same meaning (freight, cargo, an amount of transportable goods) as it has in the English "bill of lading", but is not restricted to shipping. Under English law, the Carriage of Goods by Sea Act 1992 provides that the term "bill of lading" includes a "received-for-shipment" bill of lading issued by, say, a freight forwarder or a storage depot/warehouse. A "combined bill of lading" may be issued by a carrier who, say, collects goods from a factory for subsequent delivery to a ship via multi-modal transport. Roles and purposes of bill of lading The principal use of the bill of lading is as a receipt issued by the carrier once the goods have been loaded onto the vessel. This receipt can be used as proof of shipment for customs and insurance purposes, and also as commercial proof of completing a contractual obligation, especially under INCOTERMS such as CFR[e] (cost and freight) and FOB (free on board). Although the Hague-Visby Rules provide that a bill of lading is only prima facie evidence of receipt, the Carriage of Goods by Sea Act 1992 s.4 declares a BoL "conclusive evidence of receipt".[f] The bill of lading from the carrier to the shipper can be used as evidence of the contract of carriage by the fact that the carrier has received the goods and upon the receipt, the carrier would deliver the goods. In this case, the bill of lading would be used as evidence of contract of carriage. In this case, the bill of lading can be used if the shipper does not properly ship the goods then the shipper cannot receive the bill of lading from the carrier. Eventually, the shipper would have to deliver the bill of lading to the seller. In this case, the bill of lading is used as evidence of contract of carriage between seller and carrier. However, when the bill of lading is negotiated to a bona fide third party then the bill of lading becomes conclusive evidence where no contradictory evidence can be introduced. It is because the third party cannot examine the actual shipment and can only pay attention to the document itself, not survey or examination of the shipment itself. However, the bill of lading will rarely be the contract itself, since the cargo space will have been booked previously, perhaps by telephone, email or letter. The preliminary contract will be acknowledged by both the shipper and carrier to incorporate the carrier's standard terms of business. If the Hague-Visby Rules apply, then all of the Rules will be automatically annexed to the bill of lading, thus forming a statutory contract. The bill of lading is not a contract of carriage as it is only signed by the carrier. Yet, it acts as evidence of contract due to the activities taken place between the shipper and the consignee. When the bill of lading is used as a document of title, it is particularly related to the case of the buyer. When the buyer is entitled to receive goods from the carrier, the bill of lading in this case performs as a document of title for the goods. In simple words, the function of BL as a document of title shows who owns the cargo. Whoever has the duly endorsed BL is the rightful owner of the cargo described in the BL. Carrier becomes responsible before the law if they issue cargo to a party who is not the authorised person to claim the goods under this function. Further, if the BL is a "Seaway BL" document of title function will not be applicable. 
Simply, the bill of lading confers prima facie title over the goods to the named consignee or lawful holder. Under the "nemo dat quod non habet" rule ("no one gives what he doesn't have"), a seller cannot pass better title than he himself has; so if the goods are subject to an encumbrance (such as a mortgage, charge or hypothec), or even stolen, the bill of lading will not grant full title to the holder. Types of bills of lading Bills of lading may take various forms, such as on-board and received-for-shipment. Bills of lading and charterparties compared A charterparty is the contract governing the relationship between the shipowner and the charterer. The bill of lading governs the relationship between the shipper and the carrier (who will be either a shipowner or a demise charterer). If the exporter (the shipper) is shipping a small amount of cargo, he will arrange for a carrier to carry the goods for him, using a bill of lading. If the exporter needs the whole (or a very substantial part) of the ship's cargo capacity, the exporter may need to charter the vessel, and he will enter into a charterparty agreement with the shipowner. If the charter party is a time or voyage charterparty, the shipowner will still have control of the ship and its crew. If there is a demise (or "bareboat") charterparty, the charterer will effectively have a long lease and will have full control of the vessel. When the master (captain) issues a B/L to a shipper, he will be acting as an agent for the carrier, who will be either the shipowner (time or voyage) or the charterer (demise). In a time-charterparty or voyage-charterparty, if the charterer is shipping his own cargo (rather than the cargo of a third party) he will receive a bill of lading from the master, acting as agent of the shipowner; but that B/L will serve solely as a receipt and document of title, and its terms will (subject to contrary intent) be secondary to the terms of the charterparty, which remains the dominant contract. Sea waybills and electronic data interchange (EDI) Under Art. III of the Hague-Visby Rules, a carrier must, on demand, provide the shipper with a bill of lading; but if the shipper agrees, a lesser document such as a "sea waybill" may be issued instead. In recent years, the use of bills of lading has declined, and they have tended to be replaced with the sea waybill. (If a so-called bill of lading is declared to be "non-negotiable", then it is not a true B/L, and instead will be treated as a sea waybill.) The main difference between these two documents is that the waybill gives the bearer the right to possession of the cargo, but does not confer title in the goods. As a result, there is no need for the physical document to be presented for the goods to be released. The carrier will automatically release the goods to the consignee once the import formalities have been completed. This results in a much smoother flow of trade, and has allowed shipping lines to move towards electronic data interchange which may greatly ease the flow of global trade.[citation needed] For some time, it has been the case that the cargo may arrive at the destination before the bill of lading; and a practice has arisen for the shipper (having sent the bill of lading to the banks for checking) to send to the consignee a letter of indemnity (LOI) which can be presented to the carrier in exchange for the cargo. The LOI indemnifies the carrier against any cargo claim, but the document is not transferable and has no established legal status. 
For letter of credit and documentary collection transactions, it is important to retain title to the goods until the transaction is complete. This means that the bill of lading still remains a vital document within international trade. Alternatively, to overcome the possibility of the cargo reaching the destination ahead of the bill of lading, the majority of shipping lines offer an "Express release" service (formerly known as "Telex release"). By surrendering the full set of bills of lading issued at the port of loading, the shipper allows the shipping line to instruct the port of discharge to release the cargo without the physical presentation of bills of lading at destination.[citation needed] For many years, the industry has sought a solution to the difficulties, costs and inefficiencies associated with paper bills of lading. One answer is to make the bill an electronic document. An electronic bill of lading (or eB/L) is the legal and functional equivalent of a paper bill of lading. An electronic bill of lading must replicate the core functions of a paper bill of lading, namely its functions as a receipt, as evidence of or containing the contract of carriage and as a document of title.[citation needed] The UNCITRAL Model Law on Electronic Transferable Records enables the issuance of bills of lading in electronic form that are functionally equivalent to paper-based ones. As a result, electronic bills of lading may be issued in the jurisdictions that have enacted that Model Law. These are Singapore and Bahrain.[citation needed] In addition, German law allows the use of electronic bills of lading and other documents of title (see section 516 of the German Commercial Code); the German principle of functional equivalence aligns with the MLETR. In the United Kingdom, the Electronic Trade Documents Act, enacted in July 2023, made the use of electronic bills of lading legal. See also Footnotes References External links
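The functional-equivalence idea can be illustrated with a small sketch. The following Python example is purely hypothetical: the class, field names, and parties are invented for illustration and do not correspond to any real eB/L platform, standard, or to the MLETR itself. It models a single bill-of-lading record carrying the three core functions described above (receipt, evidence of the contract of carriage, and document of title), with endorsement transferring the right to claim the goods.

# Hypothetical sketch only: a minimal data model for an electronic bill of lading.
# All names, fields and parties are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ElectronicBillOfLading:
    carrier: str                # party issuing the bill
    shipper: str                # party delivering the goods for shipment
    goods_description: str      # nature, quantity and leading marks (receipt function)
    carriage_terms: str         # reference to the contract of carriage (evidence function)
    holder: str = ""            # current lawful holder (document-of-title function)
    endorsements: list = field(default_factory=list)

    def endorse(self, new_holder: str) -> None:
        """Record an endorsement, transferring the right to claim the goods."""
        self.endorsements.append((self.holder, new_holder))
        self.holder = new_holder

    def may_release_to(self, party: str) -> bool:
        """The carrier should release the cargo only to the current lawful holder."""
        return party == self.holder

# Example: the shipper endorses the bill to a buyer; the carrier releases only to the buyer.
ebl = ElectronicBillOfLading(
    carrier="Example Line", shipper="Exporter Ltd",
    goods_description="100 bales of cotton, marks EX-001",
    carriage_terms="Hague-Visby Rules incorporated", holder="Exporter Ltd",
)
ebl.endorse("Importer GmbH")
assert ebl.may_release_to("Importer GmbH") and not ebl.may_release_to("Exporter Ltd")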
========================================
[SOURCE: https://en.wikipedia.org/wiki/Investing.com] | [TOKENS: 1318]
Contents Investing.com Investing.com is a financial data and news platform founded in 2007. Based in Israel, it ranks among the most visited financial websites globally. The site provides free access to real-time quotes, analysis, and market tools in over 30 languages. Its business model relies primarily on advertising, with additional revenue from premium subscriptions launched in 2022. The company has been owned by Hong Kong–based investment firm Joffre Capital since 2021. Investing.com aggregates content from external sources and has increasingly adopted AI tools for content generation. It has drawn criticism for publishing AI-generated articles that closely mirror original reporting by other outlets without proper attribution, and for allegedly passing user contact data to unregulated offshore brokers. History The company was founded in 2007 by Israeli entrepreneur Dror Efrat, Russian entrepreneur Lonny Szneiberg and two other co-founders, initially under the name Forexpros. The company was registered and headquartered in Cyprus, but the majority of employees and the CEO resided in Israel. At the beginning, the portal offered forex analysis, a broker directory, and a discussion forum in English, Spanish, Hebrew and Arabic. During 2008–2009, more editions were added, and the platform expanded its offering from forex data to encompass other financial instruments. The content was free, and all revenue came from advertising. The company grew rapidly; part of its success is attributed to the 2008 economic crisis and the extreme market volatility that accompanied it. As of Q2 2012, Forexpros had 2.7 million unique visitors and 35 million pageviews per month, and a headcount of 70. In late 2012, Forexpros was rebranded as Investing.com. The company purchased its current domain, Investing.com, for $2.45 million in December 2012 in one of the most expensive domain deals in history. At that point, Investing.com reported having 46 million visitors and over 400 million sessions per month, with a daily audience of 5 million mobile users and 3 billion pageviews. Investing.com's Android app was launched in September 2013, while an iOS version of the app was launched a year later. Over time, it added additional localized editions and services. Investing.com launched a new version of its cryptocurrency app in 2017. In April 2018, Mickey Winitsky replaced Itay Gissin as CEO. Winitsky announced an ambitious expansion strategy for 2018–19. In July 2018, Investing.com signed a memorandum of understanding with TokenPost, the largest cryptocurrency news website in South Korea. In September 2018, Investing.com launched AllRates, a personal finance website. In July 2019, Investing.com launched Investing Money, its personal finance vertical; AllRates, launched a year earlier, was merged into it. As of 2019, Investing.com had six offices worldwide and 300 employees across Tel Aviv, Madrid, Milan, Tokyo, Mumbai, Seoul, Shenzhen, São Paulo, and Mexico City. In November 2019, Omer Shvili was appointed CEO, succeeding the founder Dror Efrat. Shvili intended to scale down side projects recently introduced by his predecessors and to concentrate the company's activity on its core business. In February 2021, Investing.com was ranked as the 194th most popular website in the world, according to Alexa, and one of the top three financial portals. As of April 2021, Investing.com was available in 24 languages and reported more than 199 million monthly visits.
According to its founders, Investing.com's business model focused on aggregating news and data from hundreds of sources rather than on independent journalism, which made it easier to scale up to new languages. In 2021, the company was acquired by Hong Kong–based Joffre Capital for $500 million. Following the acquisition, Ding’an Fei was appointed chairman of Investing.com. After receiving tax incentives in Israel as a tech company, Investing.com moved its management to Israel and expanded its operations there. In May 2022, Investing.com launched its premium subscription, InvestingPro. In March 2023, the company acquired Streetinsider.com, a Michigan-based stock market news and analysis service provider. Most StreetInsider material became available on Investing.com free of charge; however, some was moved into the paid content category for InvestingPro subscribers. By that point, Investing.com had reached 60 million unique visitors across 136 countries. In 2023, Ding’an Fei announced the company's plans to go public within the next five years. In the first half of 2025, more than 50% of Investing.com's content was created by AI, and the content was available in 33 languages. In April 2025, Investing.com launched WarrenAI, an AI-powered financial assistant. Business model and revenues In a 2016 interview with Finance Magnates, Dror Efrat explained that the success of Investing.com lay in its multilingual offering. As the platform had always relied on content aggregation rather than content creation, it was easier and cheaper to break out of the English bubble. Globally, at that point, Investing.com was the only multilingual player in the mobile segment. At the end of fiscal year 2019, the company reported revenues of $60 million. Efrat stated that Investing.com makes all its money from advertising. According to former CEO Shlomi Biger, its advertisers were "financial brokerage firms, banks, ad-networks, and luxury brands". Investing.com has been widely criticized for using content from other media without crediting or even linking to its sources. Its AI-generated articles are edited by human staff, but critics argue that the resemblance to the originals often verges on direct plagiarism, and journalists have voiced strong suspicions of copyright violations. In his 2023 investigation, Max Tani of Semafor discovered that original analytical articles from The Motley Fool, FXStreet, and Cryptonewsland.com were published by Investing.com just a few hours after their release and with minimal changes, but without any indication of authorship. Though Investing.com actively promotes its core idea of offering premium services for free, in 2022 it launched a paid subscription and put some content behind a paywall. Investing.com has also faced criticism for allegedly monetizing user registrations by passing contact details to unlicensed offshore brokers, leading to unsolicited marketing calls. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Grapevine,_Texas] | [TOKENS: 3205]
Contents Grapevine, Texas Grapevine is a city located in northeast Tarrant County, Texas, United States, with minor portions extending into Dallas and Denton counties. Its population was 50,631 in the 2020 census, up from 46,334 in the 2010 census. The city is located in the Mid-Cities suburban region between Dallas and Fort Worth and includes a larger portion of Dallas/Fort Worth International Airport than other cities. The city is adjacent to Grapevine Lake, a large reservoir impounded by the Army Corps of Engineers in 1952 that serves as a source of water and a recreational area. History In October 1843, General Sam Houston and fellow Republic of Texas commissioners camped at Tah-Wah-Karro Creek, also known as Grape Vine Springs, to meet with leaders of 10 Indian nations. This meeting culminated in the signing of a treaty of "peace, friendship, and commerce", which opened the area for homesteaders. The settlement that emerged was named Grape Vine due to its location on the appropriately named Grape Vine Prairie near Grape Vine Springs, both names in homage to the wild grapes that grew in the area. Grapevine is the oldest settlement in Tarrant County,[citation needed] established in 1844, before Texas joined the Union in 1846. The first recorded white settlement in what would become the modern city occurred in the late 1840s and early 1850s. General Richard Montgomery Gano owned property near Grape Vine and helped organize the early settlement against Comanche raiding parties before leading his band of volunteers to battle in the American Civil War. Growth during the 19th century was slow but steady; by 1890, Grape Vine had about 800 residents supported by such amenities as a newspaper, a public school, several cotton gins, a post office, and railroad service. The settlement made continued gains early in the 20th century and on January 12, 1914, the post office altered the town's name to one word: Grapevine. On Easter Sunday, April 1, 1934, Henry Methvin, an associate of Bonnie Parker and Clyde Barrow, killed two police officers, E.B. Wheeler and H.D. Murphy, during an altercation near Grapevine. A historical marker remains at the intersection of Dove Road and State Highway 114. Grapevine's population fell during the interwar period, as the economy stagnated,[citation needed] though the city was officially incorporated in 1936. Cotton was the primary crop for Grapevine until the early 20th century, when it was overtaken by cantaloupe farms that accounted for 25,000 acres. For several decades, until the early 1970s, the Rotary Club sign outside of town boasted Grapevine as the "Cantaloupe Capital of the World". Population growth and economic gains resumed to some extent in the decades after World War II. The opening of Dallas–Fort Worth International Airport in 1974 spurred massive development. Grapevine depended heavily upon agricultural production prior to the mid-20th century, but transformed into a regional center of commerce because of its proximity to the airport's north entrance. In recent years, several wineries have opened in Grapevine, and the city has been active in maintaining its historic downtown corridor.[citation needed] Geography According to the United States Census Bureau, the city has a total area of 35.9 square miles (93 km2), of which 32.3 mi2 (84 km2) are land and 3.6 mi2 (9.3 km2) are covered by water. Demographics As of the 2020 census, Grapevine had a population of 50,631. The median age was 38.9 years. 
21.0% of residents were under the age of 18 and 13.3% of residents were 65 years of age or older. For every 100 females there were 97.0 males, and for every 100 females age 18 and over there were 95.5 males age 18 and over. 99.9% of residents lived in urban areas, while 0.1% lived in rural areas. There were 20,953 households in Grapevine, of which 28.9% had children under the age of 18 living in them. Of all households, 49.2% were married-couple households, 19.8% were households with a male householder and no spouse or partner present, and 24.7% were households with a female householder and no spouse or partner present. About 28.9% of all households were made up of individuals and 7.4% had someone living alone who was 65 years of age or older. There were 22,236 housing units, of which 5.8% were vacant. The homeowner vacancy rate was 1.3% and the rental vacancy rate was 7.6%. At the 2010 census, 46,334 people, 18,223 households, and 12,332 families were residing in the city. The population density was 1,451 people per square mile. The racial makeup of the city was 81.1% White, 3.3% African American, 0.7% Native American, 4.5% Asian, 8.0% from other races, and 2.2% from two or more races. Hispanic or Latino of any race were 18.0% of the population. Of the 18,223 households in 2010, 33.6% had children under 18 living with them, 51.9% were married couples living together, 10.5% had a single householder with no spouse present, and 33.3% were not families. About 27.1% of all households were made up of individuals, and 14.4% had someone living alone who was 65 or older. The average household size was 2.49, and the average family size was 3.06. The age distribution in the city was 25.1% under 18, 74.9% over the age of 18, 5.6% from 20 to 24, 13.3% from 25 to 34, 24.7% from 35 to 49, 20.9% from 50 to 64, and 7.9% who were 65 or older. The median age was 37.5 years. According to a 2010 estimate, the median household income was $76,040, and the median family income was $93,587. Males had a median income of $66,378 versus $47,995 for females. The per capita income was $38,304. About 5.2% of families and 7.9% of the population were below the poverty line, including 11.3% of those under age 18 and 5.5% of those age 65 or over. Data provided by the city's Economic Development Department show a general upward trend in population, with an estimated population of 54,578 as of 2020. The median age in the city was estimated at 36 years old, with more than half of residents obtaining an associate's degree or higher. Median household income had also increased to $88,225. Government Grapevine uses a council–manager government, consisting of an elected city council, composed of the mayor and six at-large councilmembers, with a city manager appointed by the council. The current city manager is Bruno Rumbelow. The government is a voluntary member of the North Central Texas Council of Governments. The mayor is William D. Tate. Grapevine, located in conservative Northeast Tarrant County, has voted Republican in all elections. The city almost entirely lies within the boundaries of Texas House District 98 and Texas Senate Districts 9 and 12, with very small portions lying within Texas House Districts 63 and 115 and Texas Senate Districts 10 and 16. Education The Grapevine-Colleyville Independent School District serves most of the city. The district operates 11 elementary schools (prekindergarten through grade 5), four middle schools (grades 6–8), and two high schools (grades 9–12). 
Colleyville Heritage High School and Grapevine High School both draw students from different areas of Grapevine. Northwestern Grapevine lies inside Carroll Independent School District, while smaller portions are served by Lewisville Independent School District and Coppell Independent School District. The Faith Christian School, a private school, is also in Grapevine. Economy Grapevine's economy is largely centered around travel and tourism, although those sectors also promote strong growth in other areas such as entertainment, retail trade, and transportation. Travelers arriving to and departing from Dallas/Fort Worth International Airport make up the majority of the city's visitors. The Gaylord Texan and Great Wolf Lodge are the two biggest hotels in Grapevine and among the biggest in the Metroplex. The hotels also have large convention centers and entertainment venues. In 2020, Coury Hospitality launched Hotel Vin, a new boutique hotel attached to the recently finished TEXRail station. Nearby Grapevine Mills Mall is a regional outlet shopping center with many amenities, including a movie theater. Embassy Suites Grapevine and the DFW Lakes Hilton complex also lie adjacent to Grapevine Mills and Bass Pro Shops. In addition to these areas, Main Street in historic downtown Grapevine is a popular attraction. Public amenities such as City Hall, the Grapevine Convention and Visitor's Bureau, the city library, public parks, and a recreation center are located on Main Street, nestled among a wealth of small businesses. These include antique stores, restaurants, bars, theaters, and many specialty shops. The Grapevine Vintage Railroad follows a historic route between Grapevine and the Fort Worth Stockyards, departing from a station on South Main Street. The city is also home to several wineries and tasting rooms, including Umbra Winery, as well as the Texas Wine and Grape Growers Association. According to the city's 2022 Annual Comprehensive Financial Report, the city's top employers are: GameStop, a national electronics retailer and one of the city's largest corporate employers, is headquartered in Grapevine. In April 2017, Kubota Corp. established a new U.S. headquarters in Grapevine, moving about 300 employees from California and spending $50 million. The facility at 1639 West 23rd Street is on the property of DFW Airport and in Grapevine. Tenants include China Airlines, Lufthansa Cargo, and the U.S. Fish and Wildlife Service. Historically, Grapevine was the headquarters of a collection of now-defunct air carriers. In 1978, Braniff Place, the final world headquarters for Braniff International Airways, was built in what is now Grapevine, on the grounds of Dallas/Fort Worth International Airport. Following Braniff's 1982 bankruptcy, the structure changed hands; it is now known as Verizon Place. In the 1990s, Metro Airlines maintained its main offices in the city of Grapevine, as did Kitty Hawk Aircargo for a time. Transportation Two grade-separated highways run through the city. State Highways 114 and 121 trisect Grapevine south and slightly west of downtown. Highway 121 runs from the south and Highway 114 from the northwest. The highways intersect near Mustang Drive and William D. Tate Avenue and continue together towards the airport before splitting again at the north entrance of Dallas/Fort Worth International Airport. Grapevine's highways underwent a significant overhaul in 2010 to improve traffic flow through the area.
Dallas/Fort Worth International Airport is the main provider of air service to Grapevine and the region, providing connections to places around the state, country, and abroad. DFW is the main hub for American Airlines, though other major carriers maintain a large presence. Love Field in Dallas is relatively close to Grapevine. The Grapevine Vintage Railroad provides service to and from the historic Fort Worth Stockyards along the former Cotton Belt Railroad right-of-way. The service acts more as a tourist attraction due to its slow speeds. However, the city's 50-year commitment to Trinity Metro and its approval of a half-cent sales tax increase paid dividends with the introduction of TEXRail service to northeast Tarrant County on January 10, 2019. New train stations downtown and north of the airport were included in the plans, as was a connection to Dallas Area Rapid Transit (DART), whose light rail provides mass transit service to the eastern half of the Dallas–Fort Worth metroplex. TEXRail service is available in Grapevine at both DFW Airport North Station and Grapevine Main Street Station. DART Silver Line service is expected to begin at DFW Airport North Station by mid-2026. The Convention and Visitor's Bureau operates the Grapevine Visitor's Shuttle between points of interest within the city, with published stops and pricing. The majority of Grapevine's transportation infrastructure is centered around the automobile, though amenities for bicycles can be found. A bicycle route runs along the length of Dove Road beginning at the intersection of Dove and North Main Street, connecting Grapevine and Southlake. Additionally, the Cotton Belt Trail runs parallel to State Highway 26, from the Colleyville city limits to downtown Grapevine. The "Dallas Road" project will stretch over 1.5 miles to extend the Cotton Belt Trail with a 10-foot-wide trail section along the north side of western Dallas Road between William D. Tate Avenue, Ball Street, and Dooley Street. A 10-foot-wide trail will also be added along the east side of Dooley north from Dallas Road to the Dallas Area Rapid Transit right-of-way. A 12-foot-wide trail section will be added east from Dooley along the north side of the DART rail corridor to Texan Trail. This will provide a wide concrete trail from Colleyville to the far east side of Grapevine. Other bicycle paths can be found at the various city parks, most notably the trail from Parr Park to Bear Creek Park. Off-road trails are also available. The Northshore Mountain Bike Trail begins at Rockledge Park on the north side of Grapevine Lake and continues into Flower Mound along the shore. It totals 22.5 miles, broken into two major sections: the East Loops (1–4), which cover 12.5 miles, and the West Loops (5–7). Horseshoe Trail begins at Catfish Lane, continues to Dove Road, and loops back to the trailhead, for a total of 5.4 miles. Grapevine received the Runner Friendly Community designation from the Road Runners Club of America. Grapevine has approximately 24 miles of hike and bike trails that link parks, schools, and businesses. The hike and bike trails have mileage markers that also carry GPS coordinates for location identification in case of emergencies. The city also has an indoor 1/8-mile walking/jogging track and several outdoor tracks that belong to the local school district. The city has a joint-use agreement with the school district for the use of school facilities.
The hike and bike trails in Grapevine include water fountains, community bathrooms or portable toilets, available parking, signs linking pedestrian networks, mile markers, walk lights at busy intersections, stop signs at residential intersections, and painted crosswalks. One trail in Grapevine links with four other communities, creating an additional 11-mile trail. Media Notable people Places Gallery Sister cities Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_ref-yahoo_49-0] | [TOKENS: 8626]
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as part of Big Tech, a term for the six largest technology companies in the United States: Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters priced the shares at $38 each, valuing the company at $104 billion, the largest valuation to that date for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Wireless and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline.
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Facebook was added to the S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of its gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook, similar to TikTok, that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to contribute $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021 about Facebook's plan to rebrand the company and change its name.
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the company's pivot toward building the metaverse – without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. Based on Facebook's PR campaign, the name change reflects the company's shifting long-term focus of building the metaverse, a digital extension of the physical world by social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project and would therefore transfer its rights to the name to Meta Platforms; the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertising revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to published reports by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Meta's Facebook and Instagram (though not Meta-owned WhatsApp) were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (including alleged calls for genocide) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures.
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales on the line of products as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8 billion. Analysts and journalists attributed the loss to its advertising business, which had been limited by Apple's App Tracking Transparency feature and the number of people who opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gave the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive, which aims to make its data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record €1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers such as Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage, instead operating with an open-source model to allow Meta to ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use. An earlier version of Llama had been released to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platform. The Online News Act was in effect by year-end, but Meta did not participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of achieving a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing false copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, exploiting Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On 4 October 2024, Meta announced its new AI model called Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10 billion global undersea cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running financial services ads in Australia to verify information about the beneficiary and payer of the ads, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage, impacting accounts on all of its social media and messaging applications. Outage reports from DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception to allow calling LGBTQ people mentally ill because they are gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta Platforms Inc. decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private company funding events of all time. In October 2025, it was announced that Meta would be laying off 600 employees in its artificial intelligence unit in an effort to make the unit leaner and more efficient. The company referred to its AI unit as "bloated" and sought to trim down the department. The layoffs were expected to affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research unit (FAIR) and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR, which released its first consumer virtual reality headset in 2016, for $2.3 billion in cash and stock. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded as Meta Platforms, Inc., Oculus was rebranded as Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising market in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. The deal reportedly valued Kustomer at slightly over $1 billion, and it was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just 8 months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to the presidency in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "One of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to eliminate the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives of partnering with third-party fact-checkers and publicly flagging fake news were regularly ineffective, and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue to disseminate a falsified video of US President Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform was excessive, the decision received criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of the wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argue were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc., and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy rules in connection with the Cambridge Analytica scandal. Each violation of the Privacy Act is subject to a theoretical maximum penalty of $1.7 million. The OAIC estimated that a total of 311,127 Australians had had their data exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook as an antitrust lawsuit against Facebook. The lawsuit concerns Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual argument about an internet in which the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this harmed competition or consumers. In November 2025, a court ruled that Meta had not violated antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging that Meta provides poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation follows a former Facebook employee's testimony in Congress that the company refused to take responsibility.
The company noted that tools have been developed for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. In addition, the company is providing resources specific to eating disorders as well as developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the advertising tool in question. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs are seeking approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021 could proceed. In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users to either allow their personal data to be used to target advertisements or pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta moved the moderation service to the Ghanaian capital of Accra after legal issues at its previous location in Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions there are worse than at the previous Kenyan location, with many workers afraid to speak out for fear of being returned to conflict zones. Workers reported developing mental illnesses, attempted suicides, and low pay. On 26 January 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares. Insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement levels of the advertisement itself. As with other online advertising platforms such as Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "look alike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5 billion. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland will be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. Irish at the GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5 billion non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta is making use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have access to the Internet. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149 million to British Land to break the lease on its Triton Square office in London. Meta reportedly had another 18 years left on its lease on the site. As of 2023, Facebook operated 21 data centers. It committed to purchase 100% renewable energy and to reduce its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Former Facebook employee Frances Haugen, the whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Diplomatic_mission] | [TOKENS: 1989]
Contents Diplomatic mission A diplomatic mission or foreign mission is a group of people from a state or organization present in another state to represent the sending state or organization officially in the receiving or host state. In practice, the phrase usually denotes an embassy or high commission, which is the main office of a country's diplomatic representatives to another country; it is usually, but not necessarily, based in the receiving state's capital city. Consulates, on the other hand, are smaller diplomatic missions that are normally located in major cities of the receiving state (but can be located in the capital, typically when the sending country has no embassy in the receiving state). In addition to being a diplomatic mission to the country in which it is located, an embassy may also be a non-resident permanent mission to one or more other countries. The term embassy is sometimes used interchangeably with chancery, the physical office or site of a diplomatic mission. Consequently, the terms "embassy residence" and "embassy office" are used to distinguish between the ambassador's residence and the chancery. Terminology A country may have several different types of diplomatic missions in another country. The head of an embassy is known as an ambassador or high commissioner. The term embassy is also commonly used to refer to the building, or the section of a building, in which the work of the diplomatic mission is carried out; strictly speaking, however, it is the diplomatic delegation itself that is the embassy, while the office space and the diplomatic work done there constitute the chancery. Therefore, the embassy operates in the chancery. The members of a diplomatic mission can reside within or outside the building that holds the mission's chancery, and their private residences enjoy the same rights as the premises of the mission as regards inviolability and protection. All missions to the United Nations are known simply as permanent missions, while EU member states' missions to the European Union are known as permanent representations, and the head of such a mission is typically both a permanent representative and an ambassador. European Union missions abroad are known as EU delegations. Some countries have a more particular nomenclature for their missions and staff: a Vatican mission is headed by a nuncio (Latin for "envoy") and consequently known as an apostolic nunciature. Under the rule of Muammar Gaddafi, Libya's missions used the name people's bureau, headed by a secretary. Missions between Commonwealth countries are known as high commissions, and their heads are high commissioners. Generally speaking, ambassadors and high commissioners are regarded as equivalent in status and function, and embassies and high commissions are both deemed to be diplomatic missions. In the past, a diplomatic mission headed by a lower-ranking official (an envoy or minister resident) was known as a legation. Since the ranks of envoy and minister resident are effectively obsolete, the designation of legation is no longer used in diplomacy and international relations. A consulate is similar to, but not the same as, a diplomatic office, with a focus on dealing with individual persons and businesses, as defined by the Vienna Convention on Consular Relations. A consulate or consulate general is generally a representative of the embassy in locales outside of the capital city.
For instance, the Philippines has its embassy to the United States in the latter's capital, Washington, D.C., but also maintains seven consulates-general in major US cities. The person in charge of a consulate or consulate-general is known as a consul or consul-general, respectively. Similar services may also be provided at the embassy (to serve the region of the capital) in what is normally called a consular section. In cases of dispute, it is common for a country to recall its head of mission as a sign of its displeasure. This is less drastic than cutting diplomatic relations completely, and the mission will continue operating more or less normally, but it will now be headed by a chargé d'affaires (usually the deputy chief of mission) who may have limited powers. A chargé d'affaires ad interim also heads the mission during the interim between the end of one chief of mission's term and the beginning of another. Extraterritoriality Contrary to popular belief, diplomatic missions generally do not enjoy full extraterritorial status and are not sovereign territory of the represented state. The sending state can give embassies sovereign status, but this only happens with a minority of countries. Rather, the premises of an embassy remain under the jurisdiction of the host state while being afforded special privileges (such as immunity from most local laws) by the Vienna Convention on Diplomatic Relations. Diplomats themselves still retain full diplomatic immunity, and (as an adherent to the Vienna Convention) the authorities of the host country may not enter the premises of the mission (which includes the head of mission's residence) without permission of the represented country. However, consent may be assumed in cases of fire or other disaster requiring prompt protective action. International rules designate an attack on an embassy as an attack on the country it represents. The term 'extraterritoriality' is often applied to diplomatic missions, but normally only in this broader sense. As the host country's authorities may not enter the representing country's embassy without permission, embassies are sometimes used by refugees escaping from either the host country or a third country. For example, North Korean nationals, who would be arrested and deported from China upon discovery, have sought sanctuary at various third-country embassies in China. Once inside the embassy, diplomatic channels can be used to solve the issue and send the refugees to another country. See the list of people who took refuge in a diplomatic mission for a list of some notable cases. Notable violations of embassy extraterritoriality include repeated invasions of the British Embassy in Beijing (1967), the hostage crisis at the American embassy in Tehran, Iran (1979–1981), and the hostage crisis at the Japanese ambassador's residence in Lima, Peru (1996–1997). Role The basic role of a diplomatic mission is to represent and safeguard the interests of the home country and its citizens in the host country.
According to the 1961 Vienna Convention on Diplomatic Relations, which establishes the framework of diplomacy among sovereign states: The functions of a diplomatic mission consist, inter alia, in representing the sending State in the receiving State; protecting in the receiving State the interests of the sending State and of its nationals, within the limits permitted by international law; negotiating with the Government of the receiving State; ascertaining by all lawful means conditions and developments in the receiving State, and reporting thereon to the Government of the sending State; promoting friendly relations between the sending State and the receiving State, and developing their economic, cultural and scientific relations. Diplomatic missions between members of the Commonwealth of Nations are not called embassies, but high commissions, for Commonwealth nations share a special diplomatic relationship. It is generally expected that an embassy of a Commonwealth country in a non-Commonwealth country will do its best to provide diplomatic services to citizens from other Commonwealth countries if the citizens' country does not have an embassy in that country. Canadian and Australian nationals enjoy even greater cooperation between their respective consular services, as outlined in the Canada-Australia Consular Services Sharing Agreement. The same kind of procedure is also followed multilaterally by the member states of the European Union (EU). European citizens in need of consular help in a country without diplomatic or consular representation of their own country may turn to any consular or diplomatic mission of another EU member state (art. 23 TFEU). Multiple missions in a city Some cities may host more than one mission from the same country. In Rome, many states maintain separate missions to both Italy and the Holy See. It is not customary for these missions to share premises or personnel. At present, only the Iraqi and United States embassies to Italy and the Holy See share premises; however, separate ambassadors are appointed, one to each country. In the case of the UN's food agencies, the sending country's ambassador to the Italian Republic is usually accredited as permanent representative. The United States maintains a separate mission to the UN agencies, led by its own ambassador, although it is located in the compound that houses its embassies to Italy and the Holy See. Several cities host both embassies/consulates and permanent representatives to international organizations, such as New York City (United Nations), Washington, D.C. (Organization of American States), Jakarta (ASEAN) and Brussels (European Union and North Atlantic Treaty Organization). In some cases, an embassy or consulate is divided between multiple locations in the same city. For example, the Bangladeshi Deputy High Commission in Kolkata has two locations: one at Park Circus and another, opened later, at Mirza Ghalib Street, to reduce overcrowding. Non-diplomatic offices Governments of states not recognized by the receiving state and of territories that make no claim to be sovereign states may set up offices abroad that do not have official diplomatic status as defined by the Vienna Convention. Examples are the Taipei Economic and Cultural Representative Offices that represent the government of the Republic of China; Somaliland's Representative Offices in London, Addis Ababa, Rome, Taipei, and Washington, D.C.; the Hong Kong and Macau economic and trade offices that represent the governments of those two territories.
Such offices assume some of the non-diplomatic functions of diplomatic posts, such as promoting trade interests and providing assistance to their citizens and residents. They are nevertheless not diplomatic missions; their personnel are not diplomats and do not have diplomatic visas, although there may be legislation providing for personal immunities and tax privileges, as in the case of the Hong Kong offices in London and Toronto or the Macau office in Lisbon, for example.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Speciesism] | [TOKENS: 7304]
Contents Speciesism Speciesism (/ˈspiːʃiːˌzɪzəm, -siːˌzɪz-/) is a term used in philosophy regarding the treatment of individuals of different species. The term has several different definitions. Some specifically define speciesism as discrimination or unjustified treatment based on an individual's species membership, while others define it as differential treatment without regard to whether the treatment is justified or not. Richard D. Ryder, who coined the term, defined it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species". Speciesism results in the belief that humans have the right to use non-human animals in exploitative ways, a belief that is pervasive in modern society. Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to hold intersecting biases that endorse racist, sexist, and other prejudicial views, furthering beliefs in human supremacy and group dominance that are used to justify systems of inequality and oppression. As a term, "speciesism" first appeared during a protest against animal experimentation in 1970. Philosophers and animal rights advocates state that speciesism plays a role in the animal–industrial complex, including in the practice of factory farming, animal slaughter, blood sports (such as bullfighting, cockfighting and rodeos), the taking of animals' fur and skin, and experimentation on animals, as well as the refusal to help animals suffering in the wild due to natural processes, and the categorization of certain animals as alien, non-naturalized, feral or invasive, which is then used to justify their killing or culling. Notable proponents of the concept include Peter Singer, Oscar Horta, Steven M. Wise, Gary L. Francione, Melanie Joy, David Nibert, Steven Best, and Ingrid Newkirk. Among academics, the ethics, morality, and concept of speciesism have been the subject of substantial philosophical debate. Carl Cohen, Nel Noddings, Bernard Williams, Peter Staudenmaier, Christopher Grau, Douglas Maclean, Roger Scruton, Thomas Wells, and Robert Nozick have criticized the term or elements of it. History Buffon, a French naturalist, writing in Histoire Naturelle in 1753, questioned whether it could be doubted that animals "whose organization is similar to ours, must experience similar sensations", and that "those sensations must be proportioned to the activity and perfection of their senses". Despite these assertions, he also maintained that there exists a gap between humans and other animals. In the poem "Poème sur le désastre de Lisbonne", Voltaire described a kinship between sentient beings, humans and animals alike, writing: "All sentient things, born by the same stern law, / Suffer like me, and like me also die." Jeremy Bentham has been identified as one of the earliest Western philosophers to advocate for animals' equal consideration within a comprehensive, secular moral framework. He argued that species membership is morally irrelevant and that any being capable of suffering has intrinsic value. In his 1789 book An Introduction to the Principles of Morals and Legislation, he wrote: The day may come, when the rest of the animal creation may acquire those rights which never could have been withheld from them but by the hand of tyranny. ... [T]he question is not, Can they reason? nor, Can they talk? but, Can they suffer? Bentham also supported animal welfare laws.
At the same time, he accepted the killing and use of animals, provided that what he regarded as unnecessary cruelty was avoided. In his 1824 work Moral Inquiries on the Situation of Man and of Brutes, English writer and early animal rights advocate Lewis Gompertz argued for egalitarianism, extending it to nonhuman animals. He stated that humans and animals have highly similar feelings and sensations, noting that experiences such as hunger, desire, fear and anger affect both in similar ways. Gompertz also pointed to shared physiological characteristics between humans and animals, suggesting a similarity in sensation.: 41–42 He criticized human use of animals, drawing attention to what he saw as a disregard for their feelings, needs and desires.: 27 English naturalist Charles Darwin, writing in his notebook in 1838, observed that humans tend to regard themselves as masterpieces produced by a deity, but recorded his own view that it was "truer to consider him created from animals". In his 1871 book The Descent of Man, Darwin argued: There is no fundamental difference between man and the higher mammals in their mental faculties ... [t]he difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind. We have seen that the senses and intuitions, the various emotions and faculties, such as love, memory, attention, curiosity, imitation, reason, etc., of which man boasts, may be found in an incipient, or even sometimes in a well-developed condition, in the lower animals. In 1843 Lewis H. Morgan published "Mind or Instinct: An Inquiry Concerning the Manifestation of Mind by the Lower Orders of Animals" in The Knickerbocker, where he used anecdotes such as dogs returning to surgeons, beavers building dams, ants storing grain and marmots posting lookouts to argue that animals display memory, foresight and reasoning. He rejected appeals to "instinct" as an explanation, suggesting instead that humans and other species share a common mental principle differing only in degree, and he questioned claims of human moral superiority while criticizing practices such as hunting for sport and killing animals for food. He developed these arguments in 1857 in an unpublished paper, "Animal Psychology", read to the Pundit Club in Rochester, New York, which again rejected instinct and attributed animal behavior to perception, memory, reflection, volition and reason. Morgan also speculated that animals might possess moral capacities and immortal souls, and he placed species on a "scale of gradation" of intelligence while remaining a creationist. Although little noticed at the time, the essay has been described in later scholarship as an unusually early critique of instinct within American comparative psychology. German philosopher Arthur Schopenhauer criticized anthropocentrism as, in his view, a fundamental defect of Christianity and Judaism. He argued that these religions contributed to the suffering of sentient beings by separating humans from other animals and encouraging their treatment as mere things. By contrast, Schopenhauer praised Brahmanism and Buddhism for their focus on kinship between humans and other animals and for their teaching about a connection between them through metempsychosis. 
According to historian Chien-Hui Li, some secularist thinkers in the late 19th and early 20th centuries argued for animals on utilitarian grounds and on the basis of evolutionary kinship, linking their views to a broader critique of Christian doctrines about suffering and social order. These secularists sought a morality independent of religious authority. Some initially supported vivisection for human benefit but later questioned its necessity. Figures such as G. W. Foote argued for broader utility, focusing on long-term moral principles rather than immediate gains. Drawing on evolutionary theories, they described common origins and similarities between humans and animals and argued that morality should extend to animals as beings capable of experiencing pain and pleasure. They rejected the idea of a theological gulf separating humans from animals and used contemporary scientific theories to support various proposals for animal rights and welfare. British writer and animal rights advocate Henry S. Salt, in his 1892 book Animals' Rights, argued that for humans to do justice to other animals they must look beyond the conception of a "great gulf" between them, claiming instead that people should recognize the "common bond of humanity that unites all living beings in one universal brotherhood". Edward Payson Evans, an American scholar and animal rights advocate, criticized anthropocentric psychology and ethics in his 1897 work Evolutional Ethics and Animal Psychology. He argued that such views treat humans as fundamentally different from other sentient beings, and he denied that this distinction removes all moral obligations toward animals.: 83 Evans held that Darwin's theory of evolution implies moral duties not only toward enslaved humans but also toward nonhuman animals. He asserted that beyond kind treatment, animals need enforceable rights to protect them from cruelty.: 14 Evans contended that recognizing kinship between humans and other sentient beings would make it impossible, in his view, to mistreat them.: 135 An 1898 article in The Zoophilist, titled "Anthropocentric Ethics", argued that some early civilizations, prior to Christianity, regarded tenderness and mercy toward sentient beings as a moral requirement. It discussed Zarathustra, Buddha and early Greek philosophers, who practiced vegetarianism, as exemplifying this outlook. The article claimed that this understanding of human–animal kinship persisted into early Christianity but was challenged by figures such as Origen, who saw animals as mere automata for human use. It concluded that the relationship between animal psychology and evolutionary ethics was gaining scientific and moral attention and could no longer be ignored. In 1895, American zoologist, philosopher and animal rights advocate J. Howard Moore described vegetarianism as the ethical result of recognizing the evolutionary kinship of all creatures, connecting his position with Darwin's insights. He criticized what he called the "pre-Darwinian delusion" that nonhuman animals were created for human use. In his 1899 book Better-World Philosophy, Moore argued that human ethics were still anthropocentric, having developed to include various human groups but not animals. He proposed "zoocentricism" as a further development, extending ethical concern to the entire sentient universe. 
In his 1906 book The Universal Kinship, Moore criticized what he described as a "provincialist" attitude leading to animal mistreatment, comparing it to denying ethical relations among human groups.: 276 He rejected what he saw as a human-centric perspective and urged consideration of the standpoint of animal victims.: 304 Moore concluded that the Golden Rule should apply to all sentient beings, advocating equal ethical consideration for animals and humans:: 327 [D]o as you would be done by, and not to the dark man and the white woman alone, but to the sorrel horse and the gray squirrel as well; not to creatures of your own anatomy only, but to all creatures. The term speciesism, and the argument that it is a prejudice, first appeared in 1970 in a privately printed pamphlet written by British psychologist Richard D. Ryder. Ryder was a member of a group of academics in Oxford, England, the nascent animal rights community, now known as the Oxford Group. One of the group's activities was distributing pamphlets about areas of concern; the pamphlet titled "Speciesism" was written to protest against animal experimentation. The term was intended by its proponents to create a rhetorical and categorical link to racism and sexism. Ryder stated in the pamphlet that "[s]ince Darwin, scientists have agreed that there is no 'magical' essential difference between humans and other animals, biologically-speaking. Why then do we make an almost total distinction morally? If all organisms are on one physical continuum, then we should also be on the same moral continuum." He wrote that, at that time in the United Kingdom, 5,000,000 animals were being used each year in experiments, and that attempting to gain benefits for our own species through the mistreatment of others was "just 'speciesism' and as such it is a selfish emotional argument rather than a reasoned one". Ryder used the term again in an essay, "Experiments on Animals", in Animals, Men and Morals (1971), a collection of essays on animal rights edited by philosophy graduate students Stanley and Roslind Godlovitch and John Harris, who were also members of the Oxford Group. Ryder wrote: In as much as both "race" and "species" are vague terms used in the classification of living creatures according, largely, to physical appearance, an analogy can be made between them. Discrimination on grounds of race, although most universally condoned two centuries ago, is now widely condemned. Similarly, it may come to pass that enlightened minds may one day abhor "speciesism" as much as they now detest "racism". The illogicality in both forms of prejudice is of an identical sort. If it is accepted as morally wrong to deliberately inflict suffering upon innocent human creatures, then it is only logical to also regard it as wrong to inflict suffering on innocent individuals of other species. ... The time has come to act upon this logic. The term was popularized by the Australian philosopher Peter Singer in his book Animal Liberation (1975). Singer had known Ryder from his own time as a graduate philosophy student at Oxford. He credited Ryder with having coined the term and used it in the title of his book's fifth chapter: "Man's Dominion ... 
a short history of speciesism", defining it as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species": Racists violate the principle of equality by giving greater weight to the interests of members of their own race when there is a clash between their interests and the interests of those of another race. Sexists violate the principle of equality by favouring the interests of their own sex. Similarly, speciesists allow the interests of their own species to override the greater interests of members of other species. The pattern is identical in each case. Singer stated from a preference-utilitarian perspective, writing that speciesism violates the principle of equal consideration of interests, the idea based on Jeremy Bentham's principle: "each to count for one, and none for more than one". Singer stated that, although there may be differences between humans and nonhumans, they share the capacity to suffer, and we must give equal consideration to that suffering. Any position that allows similar cases to be treated in a dissimilar fashion fails to qualify as an acceptable moral theory. The term caught on; Singer wrote that it was an awkward word but that he could not think of a better one. It became an entry in the Oxford English Dictionary in 1985, defined as "discrimination against or exploitation of animal species by human beings, based on an assumption of mankind's superiority." In 1994 the Oxford Dictionary of Philosophy offered a wider definition: "By analogy with racism and sexism, the improper stance of refusing respect to the lives, dignity, or needs of animals of other than the human species." The French-language journal Cahiers antispécistes ("Antispeciesist notebooks") was founded in 1991, by David Olivier, Yves Bonnardel and Françoise Blanchon, who were the first French activists to speak out against speciesism. The aim of the journal was to disseminate anti-speciesist ideas in France and to encourage debate on the topic of animal ethics, specifically on the difference between animal liberation and ecology. Estela Díaz and Oscar Horta assert that in Spanish-speaking countries, unlike English-speaking countries, anti-speciesism has become the dominant approach for animal advocacy. In Italy, the contemporary anti-speciesist movement has two main approaches: one that takes a strong, radical stance against the dominant societal norms represented by authors such as Adriano Fragano, author of the "Antispeciesist Manifesto", and another that aligns more with mainstream, neoliberal views. In the 21st century, animal rights groups such as the Farm Animal Rights Movement and People for the Ethical Treatment of Animals have attempted to popularize the concept by promoting a World Day Against Speciesism on 5 June. The World Day for the End of Speciesism (WoDES) is a similar annual observance held at the end of August. The WoDES has been held annually since 2015. Social psychology and relationship with other prejudices Scholars including philosopher Peter Singer and botanist Brent Mishler have argued that speciesism is analogous to racism, the belief that some human races are superior to others. In the 2019 book Why We Love and Exploit Animals, Kristof Dhont, Gordon Hodson, Ana C. Leite, and Alina Salmen reveal the psychological connections between speciesism and other prejudices such as racism and sexism. 
Marjetka Golež Kaučič connects racism and speciesism, saying that discrimination based on race and discrimination based on species are strongly interrelated, with human rights providing the legal ground for the development of animal rights. Kaučič further argues that racism and speciesism are also connected to issues of freedom, both collective and individual. In one study, 242 participants responded to questions on the Speciesism Scale, and those who scored higher on this scale scored higher on racism, sexism, and homophobia scales. Other studies suggest that those who support animal exploitation also tend to endorse racist and sexist views, furthering beliefs in human supremacy and group dominance in order to justify systems of inequality and oppression. It is suggested that the connection rests in the ideology of social dominance. Psychologists have also considered examining speciesism as a specific psychological construct or attitude (as opposed to speciesism as a philosophy), which was achieved using a specifically designed Likert scale. Studies have found that speciesism is a stable construct that differs amongst personalities and correlates with other variables. For example, speciesism has been found to have a weak positive correlation with homophobia and right-wing authoritarianism, as well as slightly stronger correlations with political conservatism, racism and system justification. Moderate positive correlations were found with social dominance orientation and sexism. Social dominance orientation was theorised to be underpinning most of the correlations; controlling for social dominance orientation reduces all correlations substantially and renders many statistically insignificant. Speciesism likewise predicts levels of prosociality toward animals and behavioural food choices. Those who state that speciesism is unfair to individuals of nonhuman species have often invoked mammals and chickens in the context of research or farming. There is not yet a clear definition or line agreed upon by a significant segment of the movement as to which species are to be treated equally with humans or in some ways additionally protected: mammals, birds, reptiles, arthropods, insects, bacteria, etc. This question is all the more complex since a study by Miralles et al. (2019) has brought to light the evolutionary component of human empathic and compassionate reactions and the influence of anthropomorphic mechanisms in our affective relationship with the living world as a whole: the more an organism is evolutionarily distant from us, the less we recognize ourselves in it and the less we are moved by its fate. Some researchers have suggested that since speciesism could be considered, in terms of social psychology, a prejudice (defined as "any attitude, emotion, or behaviour toward members of a group, which directly or indirectly implies some negativity or antipathy toward that group"), then laypeople may be aware of a connection between it and other forms of "traditional" prejudice. Research suggests laypeople do indeed tend to infer similar personality traits and beliefs from a speciesist that they would from a racist, sexist or homophobe. However, it is not clear if there is a link between speciesism and non-traditional forms of prejudice such as negative attitudes towards the overweight or towards Christians. Psychological studies have furthermore argued that people tend to "morally value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for".
One study found that there are age-related differences in moral views of animal worth, with children holding less speciesist beliefs than adults; the authors argue that such findings indicate that the development of speciesist beliefs is socially constructed over an individual's lifetime. Relationship with the animal–industrial complex Piers Beirne considers speciesism the ideological anchor of the intersecting networks of the animal–industrial complex, such as factory farms, vivisection, hunting and fishing, zoos and aquaria, and wildlife trade. Amy Fitzgerald and Nik Taylor argue that the animal-industrial complex is both a consequence and cause of speciesism, which according to them is a form of discrimination similar to racism or sexism. They also argue that the obfuscation of meat's animal origins is a critical part of the animal–industrial complex under capitalist and neoliberal regimes. Speciesism results in the belief that humans have the right to use non-human animals, a belief that is pervasive in modern society. Sociologist David Nibert states, The profound cultural devaluation of other animals that permits the violence that underlies the animal industrial complex is produced by far-reaching speciesist socialization. For instance, the system of primary and secondary education under the capitalist system largely indoctrinates young people into the dominant societal beliefs and values, including a great deal of procapitalist and speciesist ideology. The devalued status of other animals is deeply ingrained; animals appear in schools merely as caged "pets", as dissection and vivisection subjects, and as lunch. On television and in movies, the unworthiness of other animals is evidenced by their virtual invisibility; when they do appear, they generally are marginalized, vilified, or objectified. Not surprisingly, these and numerous other sources of speciesism are so ideologically profound that those who raise compelling moral objections to animal oppression largely are dismissed, if not ridiculed.: 208 Some scholars have argued that all kinds of animal production are rooted in speciesism, reducing animals to mere economic resources.: 422 Built on the production and slaughter of animals, the animal–industrial complex is perceived as the materialization of the institution of speciesism, with speciesism becoming "a mode of production".: 422 In his 2011 book Critical Theory and Animal Liberation, J. Sanbonmatsu argues that speciesism is not ignorance or the absence of a moral code towards animals, but is a mode of production and material system imbricated with capitalism.: 420 Arguments in favor Philosopher Carl Cohen stated in 1986: "Speciesism is not merely plausible; it is essential for right conduct, because those who will not make the morally relevant distinctions among species are almost certain, in consequence, to misapprehend their true obligations." Cohen writes that racism and sexism are wrong because there are no relevant differences between the sexes or races. Between people and animals, he states, there are significant differences; his view is that animals do not qualify for Kantian personhood, and as such have no rights. Nel Noddings, the American feminist, has criticized Singer's concept of speciesism for being simplistic, and for failing to take into account the context of species preference, as concepts of racism and sexism have taken into account the context of discrimination against humans.
Peter Staudenmaier has stated that comparisons between speciesism and racism or sexism are trivializing: The central analogy to the civil rights movement and the women's movement is trivializing and ahistorical. Both of those social movements were initiated and driven by members of the dispossessed and excluded groups themselves, not by benevolent men or white people acting on their behalf. Both movements were built precisely around the idea of reclaiming and reasserting a shared humanity in the face of a society that had deprived it and denied it. No civil rights activist or feminist ever argued, "We're sentient beings too!" They argued, "We're fully human too!" Animal liberation doctrine, far from extending this humanist impulse, directly undermines it. A similar argument was made by Bernard Williams, who observed that a difference between speciesism and racism or sexism is that racists and sexists deny any input from those of a different race or sex when it comes to questioning how they should be treated. Conversely, when it comes to how animals should be treated by humans, Williams observed that it is only possible for humans to discuss that question. Williams observed that being a human being is often used as an argument against discrimination on the grounds of race or sex, whereas racism and sexism are seldom deployed to counter discrimination. Williams also argued in favour of speciesism (which he termed 'humanism'), asking: "Why are fancy properties which are grouped under the label of personhood 'morally relevant' to issues of destroying a certain kind of animal, while the property of being a human being is not?" Williams states that to respond by arguing that it is because these are properties considered valuable by human beings does not undermine speciesism, as humans also consider human beings to be valuable, thus justifying speciesism. Williams then states that the only way to resolve this would be by arguing that these properties are "simply better", but in that case one would need to justify why these properties are better if not because of human attachment to them. Christopher Grau supported Williams, arguing that if one used properties like rationality, sentience and moral agency as criteria for moral status as an alternative to species-based moral status, then it would need to be shown why these particular properties are to be used instead of others; there must be something that gives them special status. Grau states that to claim these are simply better properties would require the existence of an impartial observer, an "enchanted picture of the universe", to state them to be so. Thus, Grau states that such properties have no greater justification as criteria for moral status than being a member of a species does. Grau also states that even if such an impartial perspective existed, it still would not necessarily be against speciesism, since it is entirely possible that there could be reasons given by an impartial observer for humans to care about humanity. Grau then further observes that if an impartial observer existed and valued only minimizing suffering, it would likely be overcome with horror at the suffering of all individuals and would rather have humanity annihilate the planet than allow it to continue. Grau thus concludes that those endorsing the idea of deriving values from an impartial observer do not seem to have seriously considered the conclusions of such an idea.
Douglas Maclean agreed that Singer raised important questions and challenges, particularly with his argument from marginal cases. However, Maclean questioned whether different species can be fitted into human morality, observing that animals were generally held exempt from morality; Maclean notes that most people would try to stop a man kidnapping and killing a woman but would regard a hawk capturing and killing a marmot with awe and criticise anyone who tried to intervene. Maclean thus suggests that morality only makes sense within human relations; the further one moves from them, the less it can be applied. The British philosopher Roger Scruton regards the emergence of the animal rights and anti-speciesism movement as "the strangest cultural shift within the liberal worldview", because the idea of rights and responsibilities is, he states, distinctive to the human condition, and it makes no sense to spread them beyond our own species. Scruton argues that if animals have rights, then they also have duties, which animals would routinely violate, such as by breaking laws or killing other animals. He accuses anti-speciesism advocates of "pre-scientific" anthropomorphism, attributing traits to animals that are, he says, Beatrix Potter-like, where "only man is vile". It is, he states, a fantasy, a world of escape. Thomas Wells states that Singer's call for ending animal suffering would justify simply exterminating every animal on the planet in order to prevent the numerous ways in which they suffer, as they could no longer feel any pain. Wells also stated that by focusing on the suffering humans inflict on animals and ignoring suffering animals inflict upon themselves or that inflicted by nature, Singer is creating a hierarchy where some suffering is more important than others, despite claiming to be committed to equality of suffering. Wells also states that the capacity to suffer, Singer's criterion for moral status, is one of degree rather than of absolute categories; Wells observes that Singer denies moral status to plants on the grounds they cannot subjectively feel anything (even though they react to stimuli), yet Wells alleges there is no indication that nonhuman animals feel pain and suffering the way humans do. Robert Nozick notes that if species membership is irrelevant, then this would mean that endangered animals have no special claim. The Rev. John Tuohey, founder of the Providence Center for Health Care Ethics, writes that the logic behind the anti-speciesism critique is flawed, and that, although the animal rights movement in the United States has been influential in slowing animal experimentation, and in some cases halting particular studies, no one has offered a compelling argument for species equality. Arguments against Paola Cavalieri writes that the current humanist paradigm is that only human beings are members of the moral community and that all are worthy of equal protection. Species membership, she writes, is ipso facto moral membership. The paradigm has an inclusive side (all human beings deserve equal protection) and an exclusive one (only human beings have that status). Nonhumans do possess some moral status in many societies, but it generally extends only to protection against what Cavalieri calls "wanton cruelty". Anti-speciesists state that the extension of moral membership to all humanity, regardless of individual properties such as intelligence, while denying it to nonhumans, also regardless of individual properties, is internally inconsistent.
According to the argument from marginal cases, if infants, the senile, the comatose, and the cognitively disabled (marginal-case human beings) have a certain moral status, then nonhuman animals must be awarded that status too since there is no morally relevant ability that the marginal-case humans have that nonhumans lack. American legal scholar Steven M. Wise states that speciesism is a bias as arbitrary as any other. He cites the philosopher R.G. Frey (1941–2012), a leading animal rights critic, who wrote in 1983 that, if forced to choose between abandoning experiments on animals and allowing experiments on "marginal-case" humans, he would choose the latter, "not because I begin a monster and end up choosing the monstrous, but because I cannot think of anything at all compelling that cedes all human life of any quality greater value than animal life of any quality." Richard Dawkins, the evolutionary biologist, wrote against speciesism in The Blind Watchmaker (1986), The Great Ape Project (1993), and The God Delusion (2006), elucidating the connection with evolutionary theory. He compares former racist attitudes and assumptions to their present-day speciesist counterparts. In the chapter "The one true tree of life" in The Blind Watchmaker, he states that it is not only zoological taxonomy that is saved from awkward ambiguity by the extinction of intermediate forms but also human ethics and law. Dawkins states that what he calls the "discontinuous mind" is ubiquitous, dividing the world into units that reflect nothing but our use of language, and animals into discontinuous species: The director of a zoo is entitled to "put down" a chimpanzee that is surplus to requirements, while any suggestion that he might "put down" a redundant keeper or ticket-seller would be greeted with howls of incredulous outrage. The chimpanzee is the property of the zoo. Humans are nowadays not supposed to be anybody's property, yet the rationale for discriminating against chimpanzees is seldom spelled out, and I doubt if there is a defensible rationale at all. Such is the breathtaking speciesism of our Christian-inspired attitudes, the abortion of a single human zygote (most of them are destined to be spontaneously aborted anyway) can arouse more moral solicitude and righteous indignation than the vivisection of any number of intelligent adult chimpanzees! ... The only reason we can be comfortable with such a double standard is that the intermediates between humans and chimps are all dead. Dawkins elaborated in a discussion with Singer at The Center for Inquiry in 2007 when asked whether he continues to eat meat: "It's a little bit like the position which many people would have held a couple of hundred years ago over slavery. Where lots of people felt morally uneasy about slavery but went along with it because the whole economy of the South depended upon slavery." "Libertarian extension" is the idea that the intrinsic value of nature can be extended beyond sentient beings. This seeks to apply the principle of individual rights not only to all animals but also to objects without a nervous system such as trees, plants, and rocks. Ryder rejects this argument, writing that "value cannot exist in the absence of consciousness or potential consciousness. Thus, rocks and rivers and houses have no interests and no rights of their own. 
This does not mean, of course, that they are not of value to us, and to many other [beings who experience pain], including those who need them as habitats and who would suffer without them." David Sztybel states in his paper, "Can the Treatment of Animals Be Compared to the Holocaust?" (2006), that the racism of the Nazis is comparable to the speciesism inherent in eating meat or using animal by-products, particularly those produced on factory farms. Y. Michael Barilan, an Israeli physician, states that speciesism is not the same thing as Nazi racism, because the latter extolled the abuser and condemned the weaker and the abused. He describes speciesism as the recognition of rights on the basis of group membership, rather than solely on the basis of moral considerations. Law and policy The first major statute addressing animal protection in the United States, titled "An Act for the More Effectual Prevention of Cruelty to Animals", was enacted in 1867. It provided for the prosecution of animal cruelty and the enforcement of protections against it. The act, which has since been revised state by state to suit modern cases, originally addressed such things as animal neglect, abandonment, torture, fighting, transport, impound standards and licensing standards. Although an animal rights movement had already started as early as the late 1800s, some of the laws that would shape the way animals were treated as industry grew were enacted around the same time that Richard Ryder was bringing the notion of speciesism into the conversation. Legislation was being proposed and passed in the U.S. that would reshape animal welfare in industry and science. The Humane Slaughter Act, created to alleviate some of the suffering felt by livestock during slaughter, was passed in 1958. Later, the Animal Welfare Act of 1966, passed by the 89th United States Congress and signed into law by President Lyndon B. Johnson, placed much stricter regulation and supervision on the handling of animals used in laboratory experimentation and exhibition; it has since been amended and expanded. These groundbreaking laws foreshadowed and influenced the shifting attitudes toward nonhuman animals and their right to humane treatment that Richard D. Ryder and Peter Singer would later popularize in the 1970s and 1980s.[citation needed] Great ape personhood is the idea that the attributes of non-human great apes are such that their sentience and personhood should be recognized by the law, rather than simply protecting them as a group under animal cruelty legislation. Awarding personhood to nonhuman primates would require that their individual interests be taken into account. Observances The World Day for the End of Speciesism (WoDES) is an international event aimed at denouncing speciesism, held annually at the end of August since 2015. The observance was initiated in 2015 by members of the Swiss association Pour l'Egalité Animale (PEA), which coordinates the international day each year and provides supporting materials. The "World Day Against Speciesism" is observed annually on June 5. Films and television series with themes around speciesism See also References Sources Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hadatha] | [TOKENS: 858]
Contents Hadatha Hadatha, also El Hadetheh or Hadateh, was a Palestinian Arab village in the District of Tiberias, located 12.5 km southwest of Tiberias. It was depopulated in the 1947–1948 civil war in Mandatory Palestine. History Ceramics from the late Roman and Byzantine era have been found. According to tradition, Hadatha was one of the "Al-Hija" villages named after Emir Hussam al-Din Abu al-Hija. Abu al-Hija ("the Daring") was a Kurdish commander that partook in Sultan Saladin's conquest (1187–93) of the Crusader Kingdom of Jerusalem. He was renowned for his bravery, and commanded the garrison of Acre at the time of the Siege of Acre (1189–1192). Abu al-Hija apparently returned to Iraq, but several members of his family remained in the country under orders from Saladin, and these family members settled on large tracts of land that they were given in the Carmel region, in the Lower, Eastern and Western Galilee, and in the Hebron Highlands. Self-proclaimed kinsmen of al-Hija settled in the villages of Hadatha and Sirin in the Lower Galilee, and Ruweis and Kawkab in the Western Galilee. By tradition the descendants today still claim to be blood relations of al-Hija. In 1596, Hadatha was part of the Ottoman Empire, and the tax register of that year revealed a population of 121. All the villagers were Muslim. A map from Napoleon's invasion of 1799 by Pierre Jacotin showed the place, named as El Hadaci. Victor Guérin, who visited in 1875, noted: "Some of the houses, which are still inhabited, have been constructed of good cut stones taken from some old buildings and mixed with small materials. On the slopes of the hill are found some ten shafts of columns lying scattered about the ground. They are the remains of a monument totally destroyed". In 1881, the PEF's Survey of Western Palestine (SWP) described El Hadetheh as: "Stone village, containing 250 Moslems, on cultivated plain, growing barley, etc. No trees or gardens near. Good spring of water and cisterns in the village". They further noted that there was a "Spring on south-east side; good supply of water, perennial; a small stream flowing from it in winter and spring." A population list from about 1887 showed el Hadatheh to have about 1,100 inhabitants; all Muslims. In the 1922 census of Palestine, conducted by the British Mandate authorities, Hadatheh had a population of 333, all Muslim, increasing in the 1931 census to 368; 1 Christian, 1 Druze and 366 Muslims, in a total of 75 houses. By the 1945 statistics, the village population was 520 Muslims, and the total land area was 10,310 dunams (10.31 km2; 3.98 sq mi). 199 dunams (0.199 km2; 0.077 sq mi) were irrigated or used for orchards, 8,379 dunams (8.379 km2; 3.235 sq mi) were used for cereals, and 38 dunams (0.038 km2; 0.015 sq mi) were built-up (urban) land. According to Morris, the village was abandoned during the 1947–1948 Civil War in Mandatory Palestine on May 12, 1948, under the orders of the Arab Higher Committee. However, Khalidi noted an inconsistency in the account, since the History of Haganah wrote that "the inhabitants fled in fear of the Jews". In 1992, it was noted that though there were no settlements on village land, the inhabitants of Kefar Qish were cultivating the surrounding lands. A number of Hadatha's dispossessed inhabitants resettled in Tamra, near Acre, during the 1950s. References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ten_Lost_Tribes] | [TOKENS: 7564]
Contents Ten Lost Tribes The Ten Lost Tribes were those from the Twelve Tribes of Israel that were said to have been exiled from the Kingdom of Israel after it was conquered by the Neo-Assyrian Empire around 720 BCE. They were the following tribes: Reuben, Simeon, Dan, Naphtali, Gad, Asher, Issachar, Zebulun, Manasseh, and Ephraim – all but Judah and Benjamin, both of which were based in the neighbouring Kingdom of Judah, and therefore survived until the Babylonian siege of Jerusalem in 587 BCE. Alongside Judah and Benjamin was part of the Tribe of Levi, which was not allowed land tenure, but received dedicated cities. The exile of Israel's population, known as the Assyrian captivity, was an instance of the long-standing resettlement policy of the Neo-Assyrian Empire implemented in many subjugated territories. The Jewish historian Josephus wrote that "there are but two tribes in Asia and Europe subject to the Romans, while the ten tribes are beyond Euphrates till now, and are an immense multitude, and not to be estimated by numbers." In the 7th and 8th centuries CE, the return of the Ten Lost Tribes was associated with the concept of the coming of the Hebrew Messiah.: 58–62 Claims of descent from the "lost tribes" have been proposed in relation to many groups, and some Abrahamic religions espouse a messianic view that Israel's tribes will return. According to contemporary research, Transjordan and Galilee did witness large-scale deportations, and entire tribes were lost. Historians have generally concluded that the deported tribes assimilated into their new local populations. In Samaria many Israelites survived the Assyrian onslaught and remained in the land, eventually coming to be known as the Samaritan people. Persistent claims have been made that some of the lost tribes survived as distinct entities. Zvi Ben-Dor Benite, a professor of Middle Eastern history at New York University, states: "The fascination with the tribes has generated, alongside ostensibly nonfictional scholarly studies, a massive body of fictional literature and folktale.": 11 Anthropologist Shalva Weil has documented various differing tribes and peoples claiming affiliation to the Ten Lost Tribes throughout the world. Scriptural basis The scriptural basis for the idea of lost tribes is 2 Kings 17:6: "In the ninth year of Hoshea, the king of Assyria took Samaria, and carried Israel away unto Assyria, and placed them in Halah, and in Habor, on the river of Gozan, and in the cities of the Medes." According to the Bible, the Kingdom of Israel and Kingdom of Judah were the successor states to the older United Monarchy of Israel. The Kingdom of Israel came into existence c. 930 BCE after the northern tribes of Israel rejected Solomon's son Rehoboam as their king. Ten tribes formed the Kingdom of Israel: the tribes of Reuben, Issachar, Zebulun, Dan, Naphtali, Gad, Asher, Ephraim, Simeon and Manasseh. However it is not clear how Simeon, whose territory was within the Judean territory, could ever have been a part of the northern kingdom. Also the territory of Asher was basically Phoenician and Reuben was mostly overlapping with Moabite territory. The tribes of Judah and Benjamin remained loyal to Rehoboam, and formed the Kingdom of Judah. In addition, members of the Tribe of Levi were located in cities in both kingdoms. According to 2 Chronicles 15:9, members of the tribes of Ephraim, Manasseh, and Simeon fled to Judah during the reign of Asa of Judah (c. 911–870 BCE). In c. 
732 BCE, the Assyrian king Tiglath-Pileser III sacked Damascus and Israel, annexing Aramea and territory of the tribes of Reuben, Gad and Manasseh in Gilead including the desert outposts of Jetur, Naphish, and Nodab. People from these tribes were taken captive and resettled in the region surrounding the Khabur River. Tiglath-Pilesar also captured the territory of Naphtali and the city of Janoah in Ephraim, and an Assyrian governor was placed over the region of Naphtali. According to 2 Kings 16:9 and 15:29, the population of Aram and the annexed part of Israel was deported to Assyria. Israel Finkelstein estimated that only a fifth of the population (about 40,000) were actually resettled out of the area during the two deportation periods under Tiglath-Pileser III, Shalmaneser V, and Sargon II.[page needed] Many also fled south to Jerusalem, which appears to have expanded in size fivefold during this period, requiring a new wall to be built, and a new source of water (Siloam) to be provided by King Hezekiah. Furthermore, 2 Chronicles 30:1–11 explicitly mentions northern Israelites who had been spared by the Assyrians—in particular, members of Dan, Ephraim, Manasseh, Asher, and Zebulun—and how members of the latter three returned to worship at the Temple in Jerusalem at that time. According to historian Zvi Ben-Dor Benite: Centuries after their disappearance, the ten lost tribes sent an indirect but vital sign ... In 2 Esdras, we read about the ten tribes and "their long journey through that region, which is called Arzareth" ... The book of the "Vision of Ezra", or Esdras, was written in Hebrew or Aramaic by a Jew in Israel sometime before the end of the first century CE, shortly after the destruction of the temple by the Romans [in 70 CE]. It is one of a group of texts later designated as the so-called Apocrypha—pseudoepigraphal books – attached to but not included in the Hebrew biblical canon.: 57 In Second [also called Fourth] Esdras, 13:39–47: 39And as for your seeing him [a man seen in a vision] gather to himself another multitude that was peaceable, 40these are the ten tribes which were led away from their own land into captivity in the days of King Hoshea, whom Shalmaneser, the king of the Assyrians, led captive; he took them across the river, and they were taken to another land. 41But they formed this plan for themselves, that they would leave the multitude of the nations and go to a more distant region, where mankind had never lived, 42that there at least they might keep their statutes which they had not kept in their own land. 43And they went in by the narrow passages of the Euphrates river. 44For at that time the Most High performed signs for them, and stopped the channels of the river until they had passed over. 45Through that region there was a long way to go, a journey of a year and a half; and that country is called Arzareth. 46Then they dwelt there until the last times; and now, when they are about to come again, 47the Most High will stop the channels of the river again, so that they may be able to pass over. In Second Baruch, also called the Syriac Apocalypse of Baruch, 77:17–78:4: 77:17But, as you asked me, I will write a letter to your brothers in Babylon, and I will send it by the hands of men; and I will write also a similar letter to the nine and a half tribes, and send it by means of a bird. 18And on the twenty-first day of the eighth month, I, Baruch, came and sat down under the oak in the shade of its branches, and no one was with me – I was alone. 
19And I wrote two letters: one I sent by eagle to the nine and a half tribes; and the other I sent to those that were in Babylon by the hands of three men. 20And I called the eagle and said to it, 21"The Most High created you to be the king of all the birds. 22Go now: stop nowhere on your journey: neither look for any roosting place, not settle on any tree, till you have crossed the broad waters of the river Euphrates, ands come to the people who dwell there, and laid this letter at their feet." [....] 78:1This is the letter that Baruch, the son of Neriah, sent to the nine and a half tribes, which were across the river Euphrates, in which these things were written. 2"Baruch, the son of Neriah, to his brothers in captivity, Mercy and peace to you. 3I can never forget, my brothers, the love of him who created us, who loved us from the beginning and never hated us, but rather subjected us to discipline. 4Nor can I forget that all we of the twelve tribes are united by a common bond, inasmuch as we are descended from a single father. [....]" The story of Anna on the occasion of the Presentation of Jesus at the Temple in the New Testament names her as being of the (lost) tribe of Asher (Luke 2:36). Views The Talmud debates whether or not the ten lost tribes will eventually be reunited with the Tribe of Judah; that is, with the Jewish people: The ten tribes will not eventually return, as is said: "He sent them to another land as it is this day" (Deuteronomy 29:27), just as the day departs and does not return, similarly they depart and do not return – according to Rabbi Akiva. Rabbi Eliezer says: "as it is this day" – just as this day grows dark and then bright again, so too the ten tribes who have been darkened will eventually be brightened [i.e. they will return]. ... Rabbi Shimon ben Yehuda of the village of Akko says in the name of Rabbi Shimon: If their deeds remain "as this day" [i.e. they continue to sin], they will not return; otherwise they shall return. An Ashkenazi Jewish legend speaks of these tribes as Die Roite Yiddelech, "the red Jews", who were cut off from the rest of Jewry by the legendary river Sambation, "whose foaming waters raise high up into the sky a wall of fire and smoke that is impossible to pass through." To varying degrees, Apocryphal accounts concerning the Lost Tribes, based on biblical accounts, have been produced by Jews and Christians since at least the 17th century.: 59 An increased currency of tales relating to lost tribes that occurred in the 17th century was due to the confluence of several factors. According to Tudor Parfitt: As Michael Pollack shows, Menasseh's argument was based on "three separate and seemingly unrelated sources: a verse from the book of Isaiah, Matteo Ricci's discovery of an old Jewish community in the heart of China and Antonio Montezinos' reported encounter with members of the Lost Tribes in the wilds of South America".: 69 In 1649, Menasseh ben Israel published his book, The Hope of Israel, in Spanish and Latin in Amsterdam; it included Antonio de Montezinos' account of the Lost Tribes in the New World. An English translation was published in London in 1650. In it, Menasseh argued that the native inhabitants of America which were encountered at the time of the European discovery were actually the descendants of the [lost] Ten Tribes of Israel and for the first time, he tried to gain support for the theory from European thinkers and publishers. 
Menasseh noted how important Montezinos' account was, for the Scriptures do not tell what people first inhabited those Countries; neither was there mention of them by any, til Christop. Columbus, Americus, Vespacius [sic], Ferdinandus, Cortez [sic], the Marquesse Del Valle [sic], and Franciscus Pizarrus [sic] went thither ... He wrote on 23 December 1649: "I think that the Ten Tribes live not only there ... but also in other lands scattered everywhere; these never did come back to the Second Temple and they keep till this day still the Jewish Religion ...": 118 According to the Book of Mormon, two families of Nephites escaped from Israel circa 600 BC shortly before the sacking of Jerusalem by Nebuchadnezzar, constructed a ship, sailed across the ocean, and arrived in the Americas in the Pre-Columbian era. These Nephites are among the ancestors of Native American tribes and possibly also the Polynesians. Adherents believe the two founding tribes were called Nephites and Lamanites, that the Nephites obeyed the Law of Moses, practiced Christianity, and that the Lamanites were rebellious. The Book of Mormon claims that the Nephites and Lamanites were who Jesus Christ was referring to when he taught, "And other sheep I have, which are not of this fold: them also I must bring, and they shall hear my voice; and there shall be one fold, and one shepherd." Eventually the Lamanites wiped out the Nephites around 400 CE, and they are among the ancestors of Native Americans. The Book of Mormon claims that other groups of Israelites, besides the Nephites, were led away by God from the time of the Exodus through the reign of King Zedekiah, and that Jesus Christ also visited them after His resurrection. Latter-Day Saints believe the ancient accounts of Quetzalcoatl and Shangdi, among others, support this doctrine. The Church of Jesus Christ of Latter-day Saints (LDS Church) believes in the literal gathering of Israel, and as of 2006 the Church actively preached the gathering of people from the twelve tribes. "Today Israelites are found in all countries of the world. Many of these people do not know that they are descended from the ancient house of Israel," the church teaches in its basic Gospel Principles manual. "The Lord promised that His covenant people would someday be gathered ... God gathers His children through missionary work. As people come to a knowledge of Jesus Christ, receiving the ordinances of salvation and keeping the associated covenants, they become 'the children of the covenant' (3 Nephi 20:26)." The church also teaches that The power and authority to direct the work of gathering the house of Israel was given to Joseph Smith by the prophet Moses, who appeared in 1836 in the Kirtland Temple. ... The Israelites are to be gathered spiritually first and then physically. They are gathered spiritually as they join The Church of Jesus Christ of Latter-day Saints and make and keep sacred covenants. ... The physical gathering of Israel means that the covenant people will be 'gathered home to the lands of their inheritance, and shall be established in all their lands of promise' (2 Nephi 9:2). The tribes of Ephraim and Manasseh will be gathered in the Americas. The tribe of Judah will return to the city of Jerusalem and the area surrounding it. The ten lost tribes will receive from the tribe of Ephraim their promised blessings (see D&C 133:26–34). ... 
The physical gathering of Israel will not be complete until the Second Coming of the Savior and on into the Millennium (see Joseph Smith—Matthew 1:37). One of their main Articles of Faith, which was written by Joseph Smith, is as follows: "We believe in the literal gathering of Israel and in the restoration of the Ten Tribes; that Zion (the New Jerusalem) will be built upon the American continent; that Christ will reign personally upon the earth; and, that the earth will be renewed and receive its paradisiacal glory." (LDS Articles of Faith #10) Regarding the Ezekiel 37 prophecy, the church teaches that the Book of Mormon is the stick of Ephraim (or Joseph) mentioned and that the Bible is the stick of Judah, thus comprising two witnesses for Jesus Christ. The church believes the Book of Mormon to be a collection of records by prophets of the ancient Americas, written on plates of gold and translated by Joseph Smith c. 1830. The church considers the Book of Mormon one of the main tools for the spiritual gathering of Israel. Some scholars suggest that while deportations took place both before and after the destruction of Israel (722–720 BCE), they were less significant than a cursory reading of the Bible's account of them indicates. During the earlier Assyrian invasions, the Transjordan and the Galilee did witness large-scale deportations, and entire tribes were lost; the tribes of Reuben, Gad, Dan, and Naphtali are never mentioned again. The region of Samaria, on the other hand, was larger and more populous. Two of the region's largest cities, Samaria and Megiddo, were mostly left intact, and the rural communities were generally left alone. Additionally, according to the Book of Chronicles, King Hezekiah of Judah invited the survivors of Ephraim, Zebulun, Asher, Issachar and Manasseh to Jerusalem to celebrate Passover. Therefore, it is assumed that the majority of people who survived the Assyrian invasions remained in the area. According to researchers, the Samaritan community of today, which claims to be descended from Ephraim, Manasseh, Levi, and, up until 1968, also Benjamin, does in fact predominantly derive from the tribes that continued to live in the region. It has been proposed that some Israelites joined the southern tribes in the Kingdom of Judah; however, this theory is debated. The Israelites who were deported are thought to have assimilated with the local populace. For instance, the New Standard Jewish Encyclopedia states: "In historic fact, some members of the Ten Tribes remained in the land of Israel, where apart from the Samaritans some of their descendants long preserved their identity among the Jewish population, others were assimilated, while others were presumably absorbed by the last Judean exiles who in 597–586 BCE were deported to Assyria ... Unlike the Judeans of the southern Kingdom, who survived a similar fate 135 years later, they soon assimilated". Search The enduring mysteries which surround the disappearance of the tribes later became sources of numerous (largely mythological) narratives in recent centuries, with historian Tudor Parfitt arguing that "this myth is a vital feature of colonial discourse throughout the long period of European overseas empires, from the beginning of the fifteenth century, until the later half of the twentieth".: 1, 225 Along with Prester John, they formed an imaginary guide for exploration and contact with uncontacted and indigenous peoples in the Age of Discovery and colonialism. 
However, during his other research projects, Parfitt discovered the possible existence of some ethnic links between several older Jewish Diaspora communities in Asia, Africa and the Middle East, especially in those Jewish communities which were established in pre-colonial times. For example, in his Y-DNA studies of males from the Lemba people of Southern Africa, Parfitt found a high proportion of paternal Semitic ancestry, DNA that is common to both Arabs and Jews from the Middle East. During his later genetic studies of the Bene Israel of India, the origins of whom were obscure, he also concluded that they were predominantly descended from males from the Middle East, a conclusion which was largely consistent with their oral histories of their origin. These findings subsequently led other Judaising groups, including the Gogodala tribe of Papua New Guinea, to seek help in determining their own origins. Ethnology and anthropology Expanded exploration and study of groups throughout the world through archaeology and the new field of anthropology in the late 19th century led to a revival or a reworking of accounts of the Lost Tribes. For instance, because the construction of the Mississippian culture's complex earthwork mounds seemed to be beyond the skills of the Native American cultures which European Americans knew about when they discovered them, it was theorized that the ancient civilizations which were involved in the construction of the mounds were linked to the Lost Tribes. The discoverers of the mounds tried to fit the new information which they acquired as the result of their archaeological findings into a biblical construct. However, the earthworks across North America have been conclusively linked to various Native groups, and today, archaeologists consider the theory of non-Native origin pseudo-scientific.[page needed] Groups which claim descent from the Lost Tribes Samaritans consider themselves to be descended from tribes of Ephraim and Manasseh who stayed in their land and kept their religion. The Jewish belief is that Samaritans are descended from foreigners who replaced the exiled northern tribes and took on the customs of natives. Many travelers and researchers have reported that the traditional folklore of Kurdish Jews claims they are descended from the Ten Lost Tribes. According to the memoirs of Benjamin of Tudela and Pethahiah of Regensburg, there were about 100 Jewish settlements and substantial Jewish population in Kurdistan in the 12th century. Benjamin of Tudela also gives the account of David Alroi, the messianic leader from central Kurdistan, who rebelled against the Seljuk Sultan Muktafi and had plans to lead the Jews back to Jerusalem. Among the Pashtuns, there is a tradition of being descended from the exiled lost tribes of Israel. This tradition was referenced in 19th century western scholarship and it was also incorporated in the "Lost Tribes" literature which was popular at that time (notably George Moore's The Lost Tribes of 1861). Recently (2000s), interest in the topic has been revived by the Jerusalem-based anthropologist Shalva Weil, who was quoted in the popular press as stating that the "Taliban may be descended from Jews". 
The traditions surrounding the Pashtuns being the remote descendants of the "Lost Tribes of Israel" are to be distinguished from the historical existence of the Jewish community in eastern Afghanistan or northwest Pakistan which flourished from about the 7th century to the early 20th century, but has essentially disappeared from the region due to emigration to Israel since the 1950s.[citation needed] According to the Encyclopaedia of Islam, the theory of Pashtun descent from Israelites can be traced to Makhzan-e-Afghani, a history book which was compiled for Khan-e-Jehan Lodhi in the reign of the Mughal Emperor Jehangir in the 17th century. The Pashtuns are a predominantly Sunni Muslim Iranic people, native to southern Afghanistan and northwestern Pakistan, who adhere to an indigenous and pre-Islamic religious code of honor and culture, Pashtunwali. The belief that the Pashtuns are descended from the lost tribes of Israel has never been substantiated by concrete historical evidence. Many members of the Taliban hail from the Pashtun tribes and they do not necessarily disclaim their alleged Israelite descent. In Pashto, the tribal name 'Yusef Zai' means the "sons of Joseph". A number of genetic studies on Jews refute the possibility of a connection, whereas others maintain a link.: 117 In 2010, The Guardian reported that the Israeli government was planning to fund a genetic study to test the veracity of a genetic link between the Pashtuns and the lost tribes of Israel.[needs update] The article stated that "Historical and anecdotal evidence strongly suggests a connection, but definitive scientific proof has never been found. Some leading Israeli anthropologists believe that, of all the many groups in the world which claim to have a connection to the 10 lost tribes, the Pashtuns, or Pathans, have the most compelling case." Some traditions of the Assyrian Jews claim that Israelites of the tribe of Benjamin first arrived in the area of modern Kurdistan after the Neo-Assyrian Empire's conquest of the Kingdom of Israel during the 8th century BCE; they were subsequently relocated to the Assyrian capital. During the first century BCE, the Assyrian royal house of Adiabene—which, according to the Jewish historian Flavius Josephus, was ethnically Assyrian and whose capital was Erbil (Aramaic: Arbala; Kurdish: Hewlêr)—was converted to Judaism. King Monobazes, his queen Helena, and his son and successor Izates are recorded as the first proselytes. The Bene Israel are a community of Jews in the Indian state of Maharashtra, residing particularly in the Konkan region. Since the formation of the State of Israel, thousands of Bene Israelis have made aliyah, although large numbers still remain in India. Since the late 20th century, some tribes in the Indian North-Eastern states of Mizoram and Manipur have been claiming that they are Lost Israelites and they have also been studying Hebrew and Judaism. In 2005, the chief rabbi of Israel ruled that the Bnei Menashe are descended from a lost tribe. Based on the ruling, Bnei Menashe are allowed to immigrate to Israel after they formally convert to Judaism. 
In 2021, 4,500 Bnei Menashe had made aliyah to Israel; 6,000 Bnei Menashe in India hope to make aliyah.[full citation needed] According to Al-Biruni, the famous 11th-century Persian Muslim scholar: "In former times the inhabitants of Kashmir used to allow one or two foreigners to enter their country, particularly Jews, but at present they do not allow any Hindus whom they do not know personally to enter, much less other people." François Bernier, a 17th-century French physician and Sir Francis Younghusband, who explored this region in the 1800s, commented on the similar physiognomy between Kashmiris and Jews, including "fair skin, prominent noses", and similar head shapes. Baikunth Nath Sharga argues that, despite the etymological similarities between Kashmiri and Jewish surnames, the Kashmiri Pandits are of Indo-Aryan descent while the Jews are of Semitic descent. The Bene Ephraim, also called Telugu Jews, claim descent from the tribe of Ephraim. Since the 1980s, they have learned to practice modern Judaism. They say that they traveled from Israel through western Asia: Persia, Afghanistan, Tibet and into China for 1,600 years before arriving in southern India more than 1,000 years ago. They hold a history which they say is similar to that of the shift of Afghan Jews, Persian Jews, Bene Israel, and Bnei Menashe. The community has been visited over the years by rabbis from the chief rabbinate in Israel to study their Jewish tradition and practices. They have sought recognition from many rabbis around the world, and they always practiced their own oral traditions and customs (caviloth), such as: burying the dead; marrying under a chuppah; observing Shabbat and other Jewish festivals, and maintaining a beit din. However, they adopted some aspects of Christianity after the arrival of British Baptist missionaries during the early 19th century although nominally practicing Judaism. Because of the long period in which the people were not practicing Judaism, they did not develop any distinctly identifiable Judæo-Telugu language as other groups did. The Beta Israel ("House of Israel") are Ethiopian Jews, who were also called "Falashas" in the past. Some members of the Beta Israel, as well as several Jewish scholars, believe that they are descended from the lost Tribe of Dan, as opposed to the traditional account of their origins which claims that they are descended from the Queen of Sheba and the Israelite king Solomon. They have a tradition of being connected to Jerusalem. Early DNA studies showed that they were descended from Ethiopians, but in the 21st century, new studies have shown their possible descent from a few Jews who lived in either the 4th or 5th century CE, possibly in Sudan. The Beta Israel made contact with other Jewish communities in the later 20th century. In 1973 Rabbi Ovadia Yosef, then the Chief Sephardic Rabbi, based on the Radbaz and other accounts, ruled that the Beta Israel were Jews and should be brought to Israel; two years later that opinion was confirmed by a number of other authorities who made similar rulings, including the Chief Ashkenazi Rabbi Shlomo Goren. The Igbo Jews of Nigeria variously claim descent from the tribes of Ephraim, Naphtali, Menasseh, Levi, Zebulun and Gad. The theory, however, does not hold up to historical scrutiny. Historians have examined the historical literature on West Africa from the colonial era and they have elucidated that such theories served diverse functions for the writers who proposed them. 
The Black Hebrew Israelites are an African American new religious movement which claims that African Americans are the descendants of the Ten Lost Tribes. The group believe that, following their displacement, the Ten Lost Tribes migrated to and settled in West Africa and they were subsequently enslaved and transported to America in the Transatlantic slave trade; where their white slave masters forced them to abandon their Jewish culture and adopt Christianity. The Black Hebrew Israelites also believe that European Jews are not descended from the original Israelites, instead, Black Hebrew Israelites believe that European Jews are "impostors". For this reason, the group is frequently considered antisemitic. They are not recognized as Jews by any major Jewish organization and they are also not recognized by the modern State of Israel. Some Gogodala people of Papua New Guinea claim to be one of the lost tribes, adopting some Jewish customs to reflect this. Speculation regarding other ethnic groups There has been speculation regarding various ethnic groups, which would be regarded as fringe theories. Some writers have speculated that the Japanese people may be the direct descendants of some of the Ten Lost Tribes. Parfitt writes that "the spread of the fantasy of Israelite origin ... forms a consistent feature of the Western colonial enterprise. ... It is in fact in Japan that we can trace the most remarkable evolution in the Pacific of an imagined Judaic past. As elsewhere in the world, the theory that aspects of the country were to be explained via an Israelite model was introduced by Western agents.": 158 In 1878, Scottish immigrant to Japan Nicholas McLeod self-published Epitome of the Ancient History of Japan. McLeod drew correlations between his observations of Japan and the fulfillment of biblical prophecy: The civilized race of the Ainus,[sic: read Ainus] the Tokugawa and the Machi No Hito of the large towns, by dwelling in the tent or tabernacle shaped houses first erected by Jin Mu Tenno, have fulfilled Noah's prophecy regarding Japheth, "He shall dwell in the tents of Shem.": 7 Jon Entine emphasizes the fact that DNA evidence shows that there are no genetic links between Japanese and Israelite people.: 117 The Lemba people (Vhalemba) of Southern Africa claim to be the descendants of several Jewish men who traveled from what is now Yemen to Africa in search of gold, where they took wives and established new communities. They specifically adhere to religious practices which are similar to those which exist in Judaism and they also have a tradition of being a migrant people, with clues which point to their origin in either West Asia or North Africa. According to the oral history of the Lemba, their ancestors were Jews who came from a place called Sena several hundred years ago and settled in East Africa. Sena is an abandoned ancient town in Yemen, located in the eastern Hadramaut valley, which history indicates was inhabited by Jews in past centuries. Some research suggests that "Sena" may refer to Wadi Masilah (near Sayhut) in Yemen, often called Sena, or alternatively to the city of Sana'a, which is also located in Yemen.: 61 DNA tests have not supported their claims of a Jewish heritage. Some early Christian missionaries to New Zealand speculated that the native Māori were descendants of the Lost Tribes. Some Māori later embraced this belief. 
In 1650, an English minister named Thomas Thorowgood, who was a preacher in Norfolk, published a book entitled Jewes in America or Probabilities that the Americans are of that Race, which he had prepared for the New England missionary society. Parfitt writes of this work: "The society was active in trying to convert the Indians but suspected that they might be Jews and realized that it had better be prepared for an arduous task. Thorowgood's tract argued that the native populations of North America were descendants of the Ten Lost Tribes.": 66 In 1652 Hamon L'Estrange, an English author who wrote literary works about topics such as history and theology published an exegetical tract called Americans no Jews, or improbabilities that the Americans are of that Race in response to the tract by Thorowgood. In response to L'Estrange, in 1660, Thorowgood published a second edition of his book with a revised title and a foreword which was written by John Eliot, a Puritan missionary to the Indians who had translated the Bible into an Indian language.: 66, 76 The American diplomat and journalist Mordecai Manuel Noah also proposed the idea that the indigenous peoples of the Americas are descended from the Israelites in his publication The American Indians Being the Descendants of the Lost Tribes of Israel (1837). That some or all American Indians are part of the lost tribes is suggested by the Book of Mormon (1830) and it is also a popular belief among Latter-day Saints. The Swedish historian of the 18th century Olof von Dalin suggested that the Scythian tribe Neuri mentioned by Greek authors were descendants of the ten lost tribes of Israel; at the same time, he considered the Neuri to be the ancestors of the Finns, the Sami, and the Estonians. Adherents of British Israelism and Christian Identity both believe that the lost tribes migrated northward, over the Caucasus, and became the Scythians, Cimmerians and Goths, as well as the progenitors of the later Germanic invaders of Britain.: 26–27 The theory first arose in England and then it spread to the United States.: 52–65 During the 20th century, British Israelism was promoted by Herbert W. Armstrong, founder of the Worldwide Church of God.: 57 Tudor Parfitt, author of The Lost Tribes: The History of a Myth, states that the proof which is cited by adherents of British Israelism is "of a feeble composition even by the low standards of the genre",: 61 and these notions are widely rejected by historians. In literature In 1929 Yiddish-writing author Lazar Borodulin published one of the very few Yiddish science fiction novels, אויף יענער זייט סמבטיון : וויסענשאפטליכער און פאנטאסטישער ראמאן (Oyf yener zayt sambatyun, visnshaftlekher un fantastisher roman, in English: On the other side of the Sambation, a scientific and fantastic novel), a novel in the "lost world" genre, written in a Jewish perspective. In the novel a journalist meets a mad scientist with a ray gun in the land of the Red Jews. In a 1934 adventure novel by Ben Aronin, The Lost Tribe. Being the Strange Adventures of Raphael Drale in Search of the Lost Tribes of Israel, a teenager, Raphael, finds the lost tribe of Dan beyond the Arctic Circle. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Yale_University] | [TOKENS: 15153]
Contents Yale University Yale University is a private Ivy League research university in New Haven, Connecticut, United States. Founded in 1701, Yale is the third-oldest institution of higher education in the United States, and one of the nine colonial colleges chartered before the American Revolution. Yale was established as the Collegiate School in 1701 by Congregationalist clergy of the Connecticut Colony. Originally focused on theology and sacred languages, the school's curriculum expanded to include humanities and sciences by the time of the American Revolution. In the 19th century, the college expanded to include graduate and professional instruction, awarding the first PhD in the United States in 1861 and organizing as a university in 1887. Yale's faculty and student populations grew rapidly after 1890 due to the expansion of the physical campus and its scientific research programs. Yale is organized into fifteen constituent schools, including the original undergraduate college, the Yale Graduate School of Arts and Sciences, and Yale Law School. While the university is governed by the Yale Corporation, each school's faculty oversees its curriculum and degree programs. In addition to a central campus in downtown New Haven, the university owns athletic facilities in western New Haven, a campus in West Haven, and forests and nature preserves throughout New England. As of 2025[update], Yale's endowment was valued at $44.1 billion, making it the third-largest among all educational institutions and the second-largest among private universities. The Yale University Library, serving all constituent schools, holds more than 15 million volumes and is the third-largest academic library in the United States. Student athletes compete in intercollegiate sports as the Yale Bulldogs in the NCAA Division I Ivy League conference. As of October 2025, 72 Nobel laureates, 5 Fields medalists, 4 Abel Prize laureates, and 3 Turing Award winners have been affiliated with Yale University. Yale alumni include 5 U.S. presidents, 10 Founding Fathers, 19 U.S. Supreme Court justices, 31 living billionaires, 54 college founders and presidents, as well as numerous heads of state, cabinet members, and governors. The university’s community also includes hundreds of members of Congress, U.S. diplomats, 96 MacArthur Fellows, 263 Rhodes Scholars, 123 Marshall Scholars, 81 Gates Cambridge Scholars, 102 Guggenheim Fellows, and 9 Mitchell Scholars. Current Yale faculty include 73 members of the National Academy of Sciences, 55 members of the National Academy of Medicine, 8 members of the National Academy of Engineering, and 200 members of the American Academy of Arts and Sciences. Over 170 Yale alumni have competed in the Olympic games and have won over 110 medals. History Yale traces its beginnings to "An Act for Liberty to Erect a Collegiate School", a would-be charter passed in New Haven by the General Court of the Colony of Connecticut on October 9, 1701. The Act aimed to establish an institution for the education of ministers and lay leaders. Soon after, a group of ten Congregational ministers, Samuel Andrew, Thomas Buckingham, Israel Chauncy, Samuel Mather (nephew of Increase Mather), Rev. James Noyes II (son of James Noyes), James Pierpont, Abraham Pierson, Noadiah Russell, Joseph Webb, and Timothy Woodbridge, all Harvard alumni, met in the study of Reverend Samuel Russell, in Branford, to donate books to form the school's library. The group, led by James Pierpont, is now known as "The Founders". 
Known from its origin as the "Collegiate School", the institution opened in the home of its first rector, Abraham Pierson, who is considered Yale's first president. Pierson lived in Killingworth. The school moved to Saybrook in 1703, when the first treasurer of Yale, Nathaniel Lynde, donated land and a building. In 1716, it moved to New Haven. Meanwhile, there was a rift forming at Harvard between its sixth president, Increase Mather, and the rest of the Harvard clergy, whom Mather viewed as increasingly liberal, ecclesiastically lax, and overly broad in Church polity. The feud caused the Mathers to champion the Collegiate School in the hope it would maintain the Puritan religious orthodoxy in a way Harvard had not. Rev. Jason Haven, minister at the First Church and Parish in Dedham, Massachusetts, had been considered for the presidency on account of his orthodox theology and "Neatness dignity and purity of Style [which] surpass those of all that have been mentioned", but was passed over due to his "very Valetudinary and infirm State of Health". In 1718, at the behest of either Rector Samuel Andrew or the colony's Governor Gurdon Saltonstall, Cotton Mather contacted the Boston-born businessman Elihu Yale to ask for money to construct a new building for the college. Through the persuasion of Jeremiah Dummer, Yale, who had made a fortune in Madras while working for the East India Company as the first president of Fort St. George, donated nine bales of goods, which were sold for more than £560, a substantial sum of money. Cotton Mather suggested the school change its name to "Yale College". The name Yale is the Anglicized spelling of the Welsh name Iâl, which had been used for the family estate at Plas yn Iâl, near Llandegla, Wales. Meanwhile, a Harvard graduate working in England convinced 180 prominent intellectuals to donate books to Yale. The 1714 shipment of 500 books represented the best of modern English literature, science, philosophy and theology. It had a profound effect on intellectuals at Yale. Undergraduate Jonathan Edwards discovered John Locke's works and developed his "new divinity". In 1722, the rector and six friends, who had a study group to discuss the new ideas, announced they had given up Calvinism, become Arminians, and joined the Church of England. They were ordained in England and returned to the colonies as missionaries for the Anglican faith. Thomas Clap became president in 1745, and while he attempted to return the college to Calvinist orthodoxy, he did not close the library. Other students found Deist books in the library. Yale College undergraduates follow a liberal arts curriculum with departmental majors and are organized into a social system of residential colleges. Yale was swept up by the great intellectual movements of the period—the Great Awakening and Enlightenment—due to the religious and scientific interests of presidents Thomas Clap and Ezra Stiles. They were instrumental in developing the scientific curriculum while dealing with wars, student tumults, graffiti, "irrelevance" of curricula, desperate need for endowment and disagreements with the Connecticut legislature.[page needed] Serious American students of theology and divinity, particularly in New England, regarded Hebrew as a classical language, along with Greek and Latin, and essential for study of the Old Testament in the original. 
Reverend Stiles, president from 1778 to 1795, brought with him his interest in Hebrew as a vehicle for studying ancient Biblical texts in their original language, requiring all freshmen to study Hebrew (in contrast to Harvard, where only upperclassmen were required to study it) and is responsible for the Hebrew phrase אורים ותמים (Urim and Thummim) on the Yale seal. A 1746 graduate of Yale, Stiles came to the college with experience in education, having played an integral role in founding Brown University. Stiles' greatest challenge occurred in 1779 when British forces occupied New Haven and threatened to raze the college. However, Yale graduate Edmund Fanning, secretary to the British general in command of the occupation, intervened and the college was saved. In 1803, Fanning was granted an honorary degree LL.D. As the only college in Connecticut from 1701 to 1823, Yale educated the sons of the elite. Punishable offenses included cardplaying, tavern-going, destruction of college property, and acts of disobedience. Harvard was distinctive for the stability and maturity of its tutor corps, while Yale had youth and zeal. The emphasis on classics gave rise to private student societies, open only by invitation, which arose as forums for discussions of scholarship, literature and politics. The first were debating societies: Crotonia in 1738, Linonia in 1753 and Brothers in Unity in 1768. Linonia and Brothers in Unity continue to exist; commemorations to them can be found with names given to campus structures, like Brothers in Unity Courtyard in Branford College. The Yale Report of 1828 was a dogmatic defense of the Latin and Greek curriculum against critics who wanted more courses in modern languages, math and science. Unlike higher education in Europe, there was no national curriculum for U.S. colleges and universities. In the competition for students and financial support, college leaders strove to keep current with demands for innovation. At the same time, they realized a significant portion of students and prospective students demanded a classical background. The report meant the classics would not be abandoned. During this period, institutions experimented with changes in the curriculum, often resulting in a dual-track curriculum. In the decentralized environment of U.S. higher education, balancing change with tradition was a common challenge. A group of professors at Yale and New Haven Congregationalist ministers articulated a conservative response to the changes brought by Victorian culture. They concentrated on developing a person possessed of religious values strong enough to sufficiently resist temptations from within, yet flexible enough to adjust to the 'isms' (professionalism, materialism, individualism, and consumerism) tempting them from without.[page needed] William Graham Sumner, professor from 1872 to 1909, taught in the emerging disciplines of economics and sociology to overflowing classrooms. Sumner bested President Noah Porter, who disliked the social sciences and wanted Yale to lock into its traditions of classical education. Porter objected to Sumner's use of a textbook by Herbert Spencer that espoused agnostic materialism because it might harm students. Until 1887, the legal name of the university was "The President and Fellows of Yale College, in New Haven". In 1887, under an act passed by the Connecticut General Assembly, Yale was renamed "Yale University". 
The Revolutionary War soldier Nathan Hale (Yale 1773) was the archetype of the Yale ideal in the early 19th century: a manly yet aristocratic scholar, well-versed in knowledge and sports, and a patriot who "regretted" that he "had but one life to lose" for his country. Western painter Frederic Remington (Yale 1900) was an artist whose heroes gloried in the combat and tests of strength in the Wild West. The fictional, turn-of-the-20th-century Yale man Frank Merriwell embodied this same heroic ideal without racial prejudice, and his fictional successor Dink Stover in the novel Stover at Yale (1912) questioned the business mentality that had become prevalent at the school. Increasingly students turned to athletic stars as their heroes, especially since winning the big game became the goal of the student body, the alumni, and the team itself. Along with Harvard and Princeton, Yale students rejected British concepts about 'amateurism' and constructed athletic programs that were uniquely American. The Harvard–Yale football rivalry began in 1875. Between 1892, when Harvard and Yale met in one of the first intercollegiate debates, and in 1909 (year of the first Triangular Debate of Harvard/Yale/Princeton) the rhetoric, symbolism, and metaphors used in athletics were used to frame these debates. Debates were covered on front pages of college newspapers and emphasized in yearbooks, and team members received the equivalent of athletic letters for their jackets. There were rallies to send off teams to matches, but they never attained the broad appeal athletics enjoyed. One reason may be that debates do not have a clear winner, because scoring is subjective. With late 19th-century concerns about the impact of modern life on the body, athletics offered hope that neither the individual nor society was coming apart. In 1909–10, football faced a crisis resulting from the failure of the reforms of 1905–06, which sought to solve the problem of serious injuries. There was a mood of alarm and mistrust, and, while the crisis was developing, the presidents of Harvard, Yale, and Princeton developed a project to reform the sport and forestall possible radical changes forced by government. Presidents Arthur Hadley of Yale, A. Lawrence Lowell of Harvard, and Woodrow Wilson of Princeton worked to develop moderate reforms to reduce injuries. Their attempts, however, were reduced by rebellion against the rules committee and formation of the Intercollegiate Athletic Association. While the big three had attempted to operate independently of the majority, the changes pushed did reduce injuries. Starting with the addition of the Yale School of Medicine in 1810, the college expanded gradually, establishing the Yale Divinity School in 1822, Yale Law School in 1822, the Yale Graduate School of Arts and Sciences in 1847, the now-defunct Sheffield Scientific School in 1847,[a] and the Yale School of Fine Arts in 1869. In 1887, under the presidency of Timothy Dwight V, Yale College was renamed to Yale University, and the former name was applied only to the undergraduate college. The university would continue to expand into the 20th and 21st centuries, adding the Yale School of Music in 1894, the Yale School of Forestry & Environmental Studies in 1900, the Yale School of Public Health in 1915, the Yale School of Architecture in 1916, the Yale School of Nursing 1923, the Yale School of Drama in 1955, the Yale School of Management in 1976, and the Jackson School of Global Affairs in 2022. 
The Sheffield Scientific School would also reorganize its relationship with the university to teach only undergraduate courses. Expansion caused controversy about Yale's new roles. Noah Porter, a moral philosopher, was president from 1871 to 1886. During an age of expansion in higher education, Porter resisted the rise of the new research university, claiming an eager embrace of its ideals would corrupt undergraduate education. Historian George Levesque argues Porter was not a simple-minded reactionary, uncritically committed to tradition, but a principled and selective conservative.[page needed] Levesque says he did not endorse everything old or reject everything new; rather, he sought to apply long-established ethical and pedagogical principles to a changing culture. Levesque concludes, noting he may have misunderstood some of the challenges, but he correctly anticipated the enduring tensions that have accompanied the emergence of the modern university. Milton Winternitz led the Yale School of Medicine as its dean from 1920 to 1935. Dedicated to the new scientific medicine established in Germany, he was equally fervent about "social medicine" and the study of humans in their environment. He established the "Yale System" of teaching, with few lectures and fewer exams, and strengthened the full-time faculty system; he created the graduate-level Yale School of Nursing and the psychiatry department and built new buildings. Progress toward his plans for an Institute of Human Relations, envisioned as a refuge where social scientists would collaborate with biological scientists in a holistic study of humankind, lasted only a few years before resentful antisemitic colleagues drove him to resign. Before World War II, most elite university faculties counted among their numbers few, if any, Jews, blacks, women, or other minorities; Yale was no exception. By 1980, this condition had been altered dramatically, as numerous members of those groups held faculty positions. Almost all members of the Faculty of Arts and Sciences—and some members of other faculties—teach undergraduate courses, more than 2,000 of which are offered annually. In 1793, Lucinda Foote passed the entrance exams for Yale College, but was rejected by the president on the basis of her gender. Women studied at Yale from 1892, in graduate-level programs at the Yale Graduate School of Arts and Sciences. The first seven women to earn PhDs received their degrees in 1894: Elizabeth Deering Hanscom, Cornelia H. B. Rogers, Sara Bulkley Rogers, Margaretta Palmer, Mary Augusta Scott, Laura Johnson Wylie, and Charlotte Fitch Roberts. There is a portrait of them in Sterling Memorial Library, painted by Brenda Zlamany. In 1966, Yale began discussions with its sister school Vassar College about merging to foster coeducation at the undergraduate level. Vassar, then all-female and part of the Seven Sisters—elite higher education schools that served as sister institutions to the Ivy League when nearly all Ivy League institutions still only admitted men—tentatively accepted, but then declined the invitation. Both schools introduced coeducation independently in 1969. Amy Solomon was the first woman to register as a Yale undergraduate; she was the first woman at Yale to join an undergraduate society, St. Anthony Hall. The undergraduate class of 1973 was the first to have women starting from freshman year; all undergraduate women were housed in Vanderbilt Hall. 
A decade into coeducation, sexual assault and harassment of students by faculty became the impetus for the trailblazing lawsuit Alexander v. Yale. In the 1970s, a group of students and a faculty member sued Yale for its failure to curtail sexual harassment, especially by male faculty. The case was partly built from a 1977 report authored by plaintiff Ann Olivarius, "A report to the Yale Corporation from the Yale Undergraduate Women's Caucus". This case was the first to use Title IX to argue and establish that sexual harassment of female students can be considered illegal sex discrimination. The plaintiffs were Olivarius, Ronni Alexander, Margery Reifler, Pamela Price, and Lisa E. Stone. They were joined by Yale classics professor John "Jack" J. Winkler. The lawsuit, brought partly by Catharine MacKinnon, alleged rape, fondling, and offers of higher grades for sex by faculty, including Keith Brion, professor of flute and director of bands, political science professor Raymond Duvall, English professor Michael Cooke, and field hockey coach Richard Kentwell. While the suit was unsuccessful in the courts, its legal reasoning changed the landscape of sex discrimination law and resulted in the establishment of Yale's Grievance Board and Women's Center. In 2011 a Title IX complaint was filed against Yale by students and graduates, including editors of Yale's feminist magazine Broad Recognition, alleging the university had a hostile sexual climate. In response, the university formed a Title IX steering committee to address complaints of sexual misconduct. Afterwards, universities and colleges throughout the U.S. also established sexual harassment grievance procedures. In the early 20th century, Yale instituted policies designed to maintain the proportion of white Protestants from notable families in the student body (see numerus clausus); it eliminated such preferences beginning with the class of 1970. In 2006, Yale and Peking University (PKU) established a Joint Undergraduate Program in Beijing, an exchange program allowing Yale students to spend a semester living and studying with PKU honor students. In July 2012, the Yale University-PKU Program ended due to weak participation. In 2007, Yale President Rick Levin characterized Yale's institutional priorities: "First, among the nation's finest research universities, Yale is distinctively committed to excellence in undergraduate education. Second, in our graduate and professional schools, as well as in Yale College, we are committed to the education of leaders." Also in 2007, the university purchased the 137-acre (0.55 km2) former Bayer campus in West Haven and Orange, Connecticut, including 17 buildings. The new Yale West Campus focuses on biotechnology, pharmaceuticals and other life sciences research. In 2009, former British Prime Minister Tony Blair picked Yale as one location—the others being Britain's Durham University and Universiti Teknologi Mara—for the Tony Blair Faith Foundation's United States Faith and Globalization Initiative. As of 2009, former Mexican President Ernesto Zedillo is the director of the Yale Center for the Study of Globalization and teaches an undergraduate seminar, "Debating Globalization". As of 2009, former presidential candidate and DNC chair Howard Dean teaches a residential college seminar, "Understanding Politics and Politicians". 
Also in 2009, an alliance was formed among Yale, University College London, and both schools' affiliated hospital complexes to conduct research focused on the direct improvement of patient care—a field known as translational medicine. President Richard Levin noted that Yale has hundreds of other partnerships across the world, but "no existing collaboration matches the scale of the new partnership with UCL". In August 2013, a new partnership with the National University of Singapore led to the opening of Yale-NUS College in Singapore, a joint effort to create a new liberal arts college in Asia featuring a curriculum including Western and Asian traditions. In 2017, after decades of suggestions to do so, Yale University renamed Calhoun College, which had been named for slave owner, anti-abolitionist, and white supremacist Vice President John C. Calhoun. It is now Hopper College, after Grace Hopper. Yale was one of several universities involved in the 2019 college admissions bribery scandal. In 2020, in the wake of the George Floyd protests, the #CancelYale tag was used on social media to demand that Elihu Yale's name be removed from Yale University. Much of the support originated from right-wing pundits such as Mike Cernovich and Ann Coulter, who intended to satirize what they perceived as the excesses of cancel culture. Elihu Yale had spent most of his professional career in the employ of the East India Company (EIC), serving as the governor of the Presidency of Fort St. George in modern-day Chennai. The EIC, including Yale himself, was involved in the Indian Ocean slave trade, though the extent of Yale's involvement in slavery remains debated. His singularly large donation led critics to argue Yale University relied on money derived from slavery for its first scholarships and endowments. In 2020, the U.S. Justice Department sued Yale for alleged discrimination against Asian and white candidates through its affirmative action admission policies. In 2021, under the new Biden administration, the Justice Department withdrew the lawsuit. The advocacy group Students for Fair Admissions later won a similar lawsuit against Harvard. In April 2024, Yale students joined students on other campuses across the United States in protests against the Gaza war. The student protestors demanded that Yale University divest from military weapons companies with ties to Israel's war on Gaza. Over 50 people were arrested at protests in and around Beinecke Plaza, and protests continued during the summer and in the new academic year starting September 2024. Undergraduate students "overwhelmingly" voted in a December referendum to call for divestment. In July 2025 Russian authorities declared Yale University to be an "undesirable" organization, banning its activities in the country. According to the Russian Prosecutor General's Office, the institution's activities are aimed at "violating the territorial integrity of Russia" and "destabilizing the socio-economic and political situation". The Boston Globe wrote in 2002 that "if there's one school that can lay claim to educating the nation's top national leaders over the past three decades, it's Yale". Yale alumni were represented on the Democratic or Republican ticket in every U.S. presidential election between 1972 and 2004. Yale-educated presidents since the end of the Vietnam War include Gerald Ford, George H. W. Bush, Bill Clinton, and George W. 
Bush, and major-party nominees include Hillary Clinton (2016), John Kerry (2004), Joseph Lieberman (vice president, 2000), and Sargent Shriver (vice president, 1972). Other alumni who have made serious bids for the presidency include Amy Klobuchar (2020), Tom Steyer (2020), Ben Carson (2016), Howard Dean (2004), Gary Hart (1984 and 1988), Paul Tsongas (1992), Pat Robertson (1988) and Jerry Brown (1976, 1980, 1992). Several explanations have been offered for Yale's representation since the end of the Vietnam War. Sources note the spirit of campus activism that has existed at Yale since the 1960s, and the intellectual influence of Reverend William Sloane Coffin on future candidates. Yale President Levin attributes the run to Yale's focus on creating "a laboratory for future leaders", an institutional priority that began during the tenure of Yale Presidents Alfred Whitney Griswold and Kingman Brewster. Richard H. Brodhead, former dean of Yale College and now president of Duke University, stated: "We do give very significant attention to orientation to the community in our admissions, and there is a very strong tradition of volunteerism at Yale". Yale historian Gaddis Smith notes "an ethos of organized activity" at Yale during the 20th century that led Kerry to lead the Yale Political Union's Liberal Party, George Pataki the Conservative Party, and Lieberman to manage the Yale Daily News. Camille Paglia points to a history of networking and elitism: "It has to do with a web of friendships and affiliations built up in school". CNN suggests that George W. Bush benefited from preferential admissions policies for the "son and grandson of alumni", and for a "member of a politically influential family". Elisabeth Bumiller and James Fallows credit the culture of community that exists between students, faculty, and administration, which downplays self-interest and reinforces commitment to others. During the 1988 presidential election, George H. W. Bush (Yale '48) derided Michael Dukakis for having "foreign-policy views born in Harvard Yard's boutique". When challenged on the distinction between Dukakis's Harvard connection and his Yale background, he said that, unlike Harvard, Yale's reputation was "so diffuse, there isn't a symbol, I don't think, in the Yale situation, any symbolism in it" and said Yale did not share Harvard's reputation for "liberalism and elitism". In 2004 Howard Dean stated, "In some ways, I consider myself separate from the other three (Yale) candidates of 2004. Yale changed so much between the class of '68 and the class of '71. My class was the first class to have women in it; it was the first class to have a significant effort to recruit African Americans. It was an extraordinary time, and in that span of time is the change of an entire generation". Administration and organization The President and Fellows of Yale College, also known as the Yale Corporation, or board of trustees, is the governing body of the university and consists of thirteen standing committees with separate responsibilities outlined in the by-laws. The corporation has 19 members: three ex officio members, ten successor trustees, and six elected alumni fellows. The university has three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the twelve professional schools. Yale's former president Richard C. Levin was, at the time, one of the highest paid university presidents in the United States with a 2008 salary of $1.5 million. 
Yale's succeeding president, Peter Salovey, ranks 40th with a 2020 salary of $1.16 million. The Yale Provost's Office and similar executive positions have launched several women into prominent university executive positions. In 1977, Provost Hanna Holborn Gray was appointed interim president of Yale and later went on to become president of the University of Chicago; she was the first woman to hold either position at the respective schools. In 1994, Provost Judith Rodin became the first permanent female president of an Ivy League institution at the University of Pennsylvania. In 2002, Provost Alison Richard became the vice-chancellor of the University of Cambridge. In 2003, the dean of the Divinity School, Rebecca Chopp, was appointed president of Colgate University and later went on to serve as the president of Swarthmore College in 2009, and then as the first female chancellor of the University of Denver in 2014. In 2004, Provost Dr. Susan Hockfield became the president of the Massachusetts Institute of Technology. In 2004, the dean of the School of Nursing, Catherine Gilliss, was appointed dean of Duke University's School of Nursing and vice chancellor for nursing affairs. In 2007, Deputy Provost H. Kim Bottomly was named president of Wellesley College. Similar examples can be found among men who have served in Yale leadership positions. In 2004, Dean of Yale College Richard H. Brodhead was appointed president of Duke University. In 2008, Provost Andrew Hamilton was confirmed to be the vice chancellor of the University of Oxford. Yale University staff are represented by several different unions. Clerical and technical workers are represented by Local 34 and service and maintenance workers by Local 35, both affiliates of UNITE HERE. Unlike similar institutions, Yale has consistently refused to recognize its graduate student union, Local 33 (another affiliate of UNITE HERE), arguing that the union's elections were undemocratic and that graduate students are not employees; the refusal to recognize the union has been criticized by the American Federation of Teachers. In addition, officers of the Yale University Police Department are represented by the Yale Police Benevolent Association, which affiliated in 2005 with the Connecticut Organization for Public Safety Employees. Yale security officers joined the International Union of Security, Police and Fire Professionals of America in late 2010, even though the Yale administration contested the election. In October 2014, after deliberation, Yale security officers decided to form a new union, the Yale University Security Officers Association, which has since represented the campus security officers. Yale has a history of difficult and prolonged labor negotiations, often culminating in strikes.[page needed] There have been at least eight strikes since 1968, and The New York Times wrote that Yale has a reputation for having the worst record of labor tension of any university in the U.S. Moreover, Yale has been accused by the AFL–CIO of failing to treat workers with respect, as well as of not renewing contracts with professors over their involvement in campus labor issues. Yale has responded to strikes by citing low union participation and the benefits of its contracts. Campus Yale's central campus in downtown New Haven covers 260 acres (1.1 km2) and comprises its main, historic campus and a medical campus adjacent to the Yale–New Haven Hospital. 
In western New Haven, the university holds 500 acres (2.0 km2) of athletic facilities, including the Yale Golf Course. In 2008, Yale purchased the 17-building, 136-acre (0.55 km2) former Bayer HealthCare complex in West Haven, Connecticut, the buildings of which are now used as laboratory and research space. Yale also owns seven forests in Connecticut, Vermont, and New Hampshire—the largest of which is the 7,840-acre (31.7 km2) Yale-Myers Forest in Connecticut's Quiet Corner—and nature preserves including Horse Island. Yale is noted for its largely Collegiate Gothic campus as well as several iconic modern buildings commonly discussed in architectural history survey courses: Louis Kahn's Yale Art Gallery and Center for British Art, Eero Saarinen's Ingalls Rink and Ezra Stiles and Morse Colleges, and Paul Rudolph's Art & Architecture Building. Yale also owns and has restored many noteworthy 19th-century mansions along Hillhouse Avenue, which was considered the most beautiful street in America by Charles Dickens when he visited the United States in the 1840s. In 2011, Travel + Leisure listed the Yale campus as one of the most beautiful in the United States. Many of Yale's buildings were constructed in the Collegiate Gothic style from 1917 to 1931, financed largely by Edward S. Harkness, including the Yale Drama School. Stone sculptures built into the walls of the buildings portray contemporary college personalities, such as a writer, an athlete, a tea-drinking socialite, and a student who has fallen asleep while reading. Similarly, the decorative friezes on the buildings depict contemporary scenes, such as a policeman chasing a robber and arresting a prostitute (on the wall of the Law School), or a student relaxing with a mug of beer and a cigarette. The architect, James Gamble Rogers, faux-aged these buildings by splashing the walls with acid, deliberately breaking their leaded glass windows and repairing them in the style of the Middle Ages, and creating niches for decorative statuary but leaving them empty to simulate loss or theft over the ages. In fact, the buildings merely simulate medieval architecture: though they appear to be constructed of solid stone blocks in the authentic manner, most actually have the steel framing that was commonly used around 1930. One exception is Harkness Tower, 216 feet (66 m) tall, which was originally a free-standing stone structure. It was reinforced in 1964 to allow the installation of the Yale Memorial Carillon. Other examples of the Gothic style are on the Old Campus by architects such as Henry Austin, Charles C. Haight and Russell Sturgis. Several are associated with members of the Vanderbilt family, including Vanderbilt Hall, Phelps Hall, St. Anthony Hall (a commission for member Frederick William Vanderbilt), the Mason, Sloane and Osborn laboratories, dormitories for the Sheffield Scientific School (the engineering and sciences school at Yale until 1956) and elements of Silliman College, the largest residential college. The oldest building on campus, Connecticut Hall (built in 1750), is in the Georgian style. Georgian-style buildings erected from 1929 to 1933 include Timothy Dwight College, Pierson College, and Davenport College, except the latter's east, York Street façade, which was constructed in the Gothic style to coordinate with adjacent structures. 
The Beinecke Rare Book and Manuscript Library, designed by Gordon Bunshaft of Skidmore, Owings & Merrill, is one of the largest buildings in the world reserved exclusively for the preservation of rare books and manuscripts. The library includes a six-story above-ground tower of book stacks, filled with 180,000 volumes, that is surrounded by large translucent Vermont marble panels and a steel and granite truss. The panels act as windows and subdue direct sunlight while also diffusing the light in warm hues throughout the interior. Near the library is a sunken courtyard with sculptures by Isamu Noguchi that are said to represent time (the pyramid), the sun (the circle), and chance (the cube). The library is located near the center of the university in Hewitt Quadrangle, which is now more commonly referred to as "Beinecke Plaza". Alumnus Eero Saarinen, Finnish-American architect of such notable structures as the Gateway Arch in St. Louis, Dulles International Airport Main Terminal, Bell Labs Holmdel Complex and the CBS Building in Manhattan, designed Ingalls Rink, dedicated in 1959, as well as the residential colleges Ezra Stiles and Morse. These latter were modeled after the medieval Italian hill town of San Gimignano—a prototype chosen for the town's pedestrian-friendly milieu and fortress-like stone towers. These tower forms at Yale act in counterpoint to the college's many Gothic spires and Georgian cupolas. The athletic field complex is partially in New Haven, and partially in West Haven. Notable nonresidential campus buildings and landmarks include Battell Chapel, Beinecke Rare Book Library, Harkness Tower, Humanities Quadrangle, Ingalls Rink, Kline Biology Tower, Osborne Memorial Laboratories, Payne Whitney Gymnasium, Peabody Museum of Natural History, Sterling Hall of Medicine, Sterling Law Buildings, Sterling Memorial Library, Woolsey Hall, Yale Center for British Art, Yale University Art Gallery, Yale Art & Architecture Building, and the Paul Mellon Centre for Studies in British Art in London. Yale's secret society buildings (some of which are called "tombs") were built to be private yet unmistakable. A diversity of architectural styles is represented: Berzelius, by Donn Barber, an austere cube with classical detailing (erected in 1908 or 1910); Book and Snake, by Louis R. Metcalfe, in a Greek Ionic style (erected in 1901); Elihu, architect unknown, built in a Colonial style on an early 17th-century foundation, although the building itself dates from the 18th century; Mace and Chain, in a late colonial, early Victorian style (built in 1823), whose interior moulding is said to have belonged to Benedict Arnold; Manuscript Society, by King-lui Wu, with landscaping by Dan Kiley and a brickwork intaglio mural by Josef Albers, built in a mid-century modern style; Scroll and Key, by Richard Morris Hunt, in a Moorish- or Islamic-inspired Beaux-Arts style (erected 1869–70); Skull and Bones, possibly by Alexander Jackson Davis or Henry Austin, in an Egypto-Doric style using brownstone (the first wing was completed in 1856, the second wing in 1903, and the Neo-Gothic towers in the rear garden in 1911); St. Elmo (former tomb), by Kenneth M. Murchison, 1912, with designs inspired by an Elizabethan manor, its current location a brick colonial; and Wolf's Head, by Bertram Grosvenor Goodhue, Collegiate Gothic, erected 1923–1924. Yale's Office of Sustainability develops and implements sustainability practices at Yale. 
Yale is committed to reducing its greenhouse gas emissions to 10% below 1990 levels by 2020. As part of this commitment, the university allocates renewable energy credits to offset some of the energy used by residential colleges. Eleven campus buildings are candidates for LEED design and certification. Yale was listed as a Campus Sustainability Leader on the Sustainable Endowments Institute's College Sustainability Report Card 2008, and received a "B+" grade overall. Yale is a member of the Ivy Plus Sustainability Consortium, through which it has committed to best-practice sharing and the ongoing exchange of campus sustainability solutions along with other member institutions. Yale is the largest taxpayer and employer in the City of New Haven, and has often buoyed the city's economy and communities. Yale, however, has consistently opposed paying a tax on its academic property. Yale's Art Galleries, along with many other university resources, are free and openly accessible. Yale also funds the New Haven Promise program, paying full tuition for eligible students from New Haven public schools. Yale has a complicated relationship with its home city; for example, thousands of students volunteer every year in myriad community organizations, but city officials, who decry Yale's exemption from local property taxes, have long pressed the university to do more to help. Under President Levin, Yale has financially supported many of New Haven's efforts to reinvigorate the city. Evidence suggests that the town–gown relationship is mutually beneficial. Still, the economic power of the university increased dramatically with its financial success amid a decline in the local economy. Several campus safety strategies have been pioneered at Yale. The first campus police force was founded at Yale in 1894, when the university contracted city police officers to exclusively cover the campus. Later hired by the university, the officers were originally brought in to quell unrest between students and city residents and curb destructive student behavior. In addition to the Yale Police Department, a variety of safety services are available, including blue phones, a safety escort, and 24-hour shuttle service. In the 1970s and 1980s, poverty and violent crime rose in New Haven, dampening Yale's student and faculty recruiting efforts. Between 1990 and 2006, New Haven's crime rate fell by half, helped by a community policing strategy adopted by the New Haven Police, and Yale's campus became the safest among its peer schools. In 2004, the national non-profit watchdog group Security on Campus filed a complaint with the U.S. Department of Education, accusing Yale of under-reporting rape and sexual assaults. Academics Undergraduate admission to Yale College is considered "most selective" by U.S. News. In 2022, Yale accepted 2,234 students to the Class of 2026 out of 50,015 applicants, for an acceptance rate of 4.46%. 98% of students graduate within six years. Yale's most selective graduate schools are: the Law School (4%), School of Medicine (5%), Graduate School of Arts and Sciences (5.7%, PhD admissions only), School of Art (6%), School of Music (10%), School of Architecture (15%), and Divinity School (15%). Yale Divinity School is notable for being the most selective institution for the study of religion in the world. Through its program of need-based financial aid, Yale commits to meeting the full demonstrated financial need of all applicants, and the university is need-blind for both domestic and international applicants. 
Most financial aid is in the form of grants and scholarships that do not need to be paid back to the university, and the average need-based aid grant for the Class of 2017 was $46,395. 15% of Yale College students are expected to have no parental contribution, and about 50% receive some form of financial aid. About 16% of the Class of 2013 had some form of student loan debt at graduation, with an average debt of $13,000 among borrowers. For 2019, Yale ranked second in enrollment of recipients of the National Merit $2,500 Scholarship (140 scholars). Half of all Yale undergraduates are women, more than 39% are ethnic minority U.S. citizens (19% are underrepresented minorities), and 10.5% are international students. 55% attended public schools and 45% attended private, religious, or international schools, and 97% of students were in the top 10% of their high school class. Every year, Yale College also admits a small group of non-traditional students through the Eli Whitney Students Program. Yale University Library, which holds over 15 million volumes, is the second-largest university collection in the United States. The main library, Sterling Memorial Library, contains about 4 million volumes, and other holdings are dispersed at subject and location libraries. Rare books are found in several Yale collections. The Beinecke Rare Book Library has a large collection of rare books and manuscripts. The Harvey Cushing/John Hay Whitney Medical Library includes important historical medical texts, including an impressive collection of rare books, as well as historical medical instruments. The Lewis Walpole Library contains the largest collection of 18th‑century British literary works. The Elizabethan Club, technically a private organization, makes its Elizabethan folios and first editions available to qualified researchers through Yale. Yale's museum collections are also of international stature. The Yale University Art Gallery, the country's first university-affiliated art museum, contains more than 200,000 works, including Old Masters and important collections of modern art, in the Swartwout and Kahn buildings. The latter, Louis Kahn's first large-scale American work (1953), was renovated and reopened in December 2006. The Yale Center for British Art, the largest collection of British art outside of the UK, grew from a gift of Paul Mellon and is housed in another Kahn-designed building. The Peabody Museum of Natural History in New Haven is used by school children and contains research collections in anthropology, archaeology, and the natural environment. The Yale University Collection of Musical Instruments, affiliated with the Yale School of Music, is perhaps the least-known of Yale's collections because its hours of opening are restricted. The museums once housed the artifacts brought to the United States from Peru by Yale history professor Hiram Bingham in his Yale-financed expedition to Machu Picchu in 1912—when the removal of such artifacts was legal. The artifacts were restored to Peru in 2012. The U.S. News & World Report ranked Yale fifth among U.S. national universities for 2025. Yale University is accredited by the New England Commission of Higher Education. Internationally, Yale was ranked 11th in the 2025 Academic Ranking of World Universities. The university was ranked 21st in the QS World University Rankings 2026. Yale is a member of the Association of American Universities (AAU) and is classified among "R1: Doctoral Universities—Very high research activity". 
The National Science Foundation ranked Yale 15th among American universities for research and development expenditures in 2021, at $1.16 billion. Yale's current faculty include 67 members of the National Academy of Sciences, 55 members of the National Academy of Medicine, 8 members of the National Academy of Engineering, and 187 members of the American Academy of Arts and Sciences. The college is, after normalization for institution size, the tenth-largest baccalaureate source of doctoral degree recipients in the United States, and the largest such source within the Ivy League. After normalization for the number of graduates, it also ranks seventh as a baccalaureate source of the most notable scientists (winners of the Nobel, Fields, or Turing prizes, or members of the National Academy of Sciences, National Academy of Engineering, or National Academy of Medicine). Yale's English and Comparative Literature departments were part of the New Criticism movement. Of the New Critics, Robert Penn Warren, W.K. Wimsatt, and Cleanth Brooks were all Yale faculty. Later, the Yale comparative literature department became a center of American deconstruction. Jacques Derrida, the father of deconstruction, taught at the department of comparative literature from the late 1970s to the mid-1980s. Several other Yale faculty members were also associated with deconstruction, forming the so-called "Yale School". These included Paul de Man, who taught in the Departments of Comparative Literature and French; J. Hillis Miller and Geoffrey Hartman, who both taught in the Departments of English and Comparative Literature; and Harold Bloom (English), whose theoretical position was always somewhat distinct, and who ultimately took a very different path from the rest of this group. Yale's history department has also originated important intellectual trends. Historians C. Vann Woodward and David Brion Davis are credited with beginning, in the 1960s and 1970s, an important stream of southern historians; likewise, David Montgomery, a labor historian, advised many of the current generation of labor historians in the country. Yale's School of Music and Department of Music fostered the growth of music theory in the latter half of the 20th century. The Journal of Music Theory was founded there in 1957; Allen Forte and David Lewin were influential teachers and scholars. Since the late 1960s, Yale has produced social science and policy research through its Institution for Social and Policy Studies (ISPS). In addition to eminent faculty members, Yale research relies heavily on roughly 1,200 postdocs of varied national and international origins working in laboratories across the sciences, social sciences, humanities, and professional schools of the university. The university has progressively recognized this workforce with the creation of the Office for Postdoctoral Affairs and the Yale Postdoctoral Association. Campus life Yale is a research university, with the majority of its students in the graduate and professional schools. Undergraduates, or Yale College students, come from a variety of ethnic, national, socioeconomic, and personal backgrounds. Of the 2010–2011 freshman class, 10% are non‑U.S. citizens, while 54% went to public high schools. The median family income of Yale students is $192,600, with 57% of students coming from the top 10% highest-earning families and 16% from the bottom 60%. Yale's residential college system was established in 1933 by Edward S. 
Harkness, who admired the social intimacy of Oxford and Cambridge and donated significant funds to found similar colleges at Yale and Harvard. Though Yale's colleges resemble their English precursors organizationally and architecturally, they are dependent entities of Yale College and have limited autonomy. The colleges are led by a head and an academic dean, who reside in the college, and university faculty and affiliates constitute each college's fellowship. Colleges offer their own seminars, social events, and speaking engagements known as "Master's Teas", but do not contain programs of study or academic departments. All other undergraduate courses are taught by the Faculty of Arts and Sciences and are open to members of any college. All undergraduates are members of a college, to which they are assigned before their freshman year, and 85 percent live in the college quadrangle or a college-affiliated dormitory. While the majority of upperclassmen live in the colleges, most on-campus freshmen live on the Old Campus, the university's oldest precinct. While Harkness' original colleges were Georgian Revival or Collegiate Gothic in style, two colleges constructed in the 1960s, Morse and Ezra Stiles Colleges, have modernist designs. All twelve college quadrangles are organized around a courtyard, and each has a dining hall, library, common room, and a range of student facilities. The twelve colleges are named for important alumni or significant places in university history. In 2017, the university opened two new colleges near Science Hill. Since the 1960s, John C. Calhoun's white supremacist beliefs and pro-slavery leadership had prompted calls to rename the college or remove its tributes to Calhoun. The racially motivated church shooting in Charleston, South Carolina, led to renewed calls in the summer of 2015 for Calhoun College, one of 12 residential colleges at the time, to be renamed. In July 2015 students signed a petition calling for the name change. They argued in the petition that—while Calhoun was respected in the 19th century as an "extraordinary American statesman"—he was "one of the most prolific defenders of slavery and white supremacy" in the history of the United States. In August 2015, Yale President Peter Salovey delivered an address to the freshman Class of 2019 in which he responded to the racial tensions but explained why the college would not be renamed. He described Calhoun as "a notable political theorist, a vice president to two different U.S. presidents, a secretary of war and of state, and a congressman and senator representing South Carolina". He acknowledged that Calhoun also "believed that the highest forms of civilization depend on involuntary servitude. Not only that, but he also believed that the races he thought to be inferior, black people in particular, ought to be subjected to it for the sake of their own best interests." Student activism about this issue increased in the fall of 2015, and included further protests sparked by controversy surrounding an administrator's comments on the potential positive and negative implications of students who wear Halloween costumes that are culturally insensitive. Campus-wide discussions expanded to include critical discussion of the experiences of women of color on campus, and the realities of racism in undergraduate life. The protests were sensationalized by the media and led to the labelling of some students as members of Generation Snowflake. 
In April 2016, Salovey announced that, "despite decades of vigorous alumni and student protests", Calhoun's name would remain on the Yale residential college, explaining that it was preferable for Yale students to live in Calhoun's "shadow" so they would be "better prepared to rise to the challenges of the present and the future". He claimed that removing Calhoun's name would "obscure" his "legacy of slavery rather than addressing it", adding that "Yale is part of that history" and "We cannot erase American history, but we can confront it, teach it and learn from it." One change that was made: the title of "master" for faculty members who serve as residential college leaders was changed to "head of college" because of the title's connotations of slavery. Despite this apparently conclusive reasoning, Salovey announced in February 2017 that Calhoun College would be renamed for groundbreaking computer scientist Grace Hopper. This renaming decision received a range of responses from Yale students and alumni. In his 2019 book Assault on American Excellence, former Dean of Yale Law School Anthony T. Kronman criticized the title and name changes and the lack of support from Salovey for the Christakises, who were targeted by the student activists. Other members of the university community disagreed with Kronman's positions. In 2024, Yale had 526 registered undergraduate student organizations, plus hundreds of others for graduate students. The university hosts a variety of student journals, magazines, and newspapers. The Yale Literary Magazine, founded in February 1836, is the oldest student literary magazine in the United States. Established in 1872, The Yale Record is the world's oldest college humor magazine. Newspapers include the Yale Daily News, which was first published in 1878, and the weekly Yale Herald, which was first published in 1986. The Yale Journal of Medicine & Law is a biannual magazine that explores the intersection of law and medicine. Dwight Hall, an independent, non-profit community service organization, oversees more than 2,000 Yale undergraduates working on more than 70 community service initiatives in New Haven. The Yale College Council runs several agencies that oversee campus-wide activities and student services. The Yale Dramatic Association and Bulldog Productions cater to the theater and film communities, respectively. In addition, the Yale Drama Coalition coordinates and provides resources for the various Sudler Fund–sponsored theater productions that run each weekend. WYBC Yale Radio is the campus's radio station, owned and operated by students. While students used to broadcast on AM and FM frequencies, they now have an Internet-only stream. The Yale College Council (YCC) serves as the campus's undergraduate student government. All registered student organizations are regulated and funded by a subsidiary organization of the YCC, known as the Undergraduate Organizations Funding Committee (UOFC). The Graduate and Professional Student Senate (GPSS) serves as Yale's graduate and professional student government. The Yale Political Union (YPU) is a debate society founded in 1934 to host student discussions on a wide variety of topics. It is advised by alumni political leaders such as John Kerry and George Pataki. The Yale International Relations Association (YIRA) functions as the umbrella organization for the university's top-ranked Model UN team. 
YIRA also runs a Europe-based offshoot, Yale Model Government Europe; other Model UN conferences such as YMUN, YMUN Korea, YMUN Taiwan, and the Yale Model African Union (YMAU); the Yale Review of International Studies (YRIS); and educational programs such as the Yale International Relations Leadership Institute and Hemispheres. The campus includes several fraternities and sororities. The campus features at least 18 a cappella groups, the most famous of which is The Whiffenpoofs, which from its founding in 1909 until 2018 was made up solely of senior men. The Elizabethan Club, a social club, has a membership of undergraduates, graduates, faculty and staff with literary or artistic interests. Membership is by invitation. Members and their guests may enter the premises of the "Lizzie" for conversation and tea. The club owns a Shakespeare Folio, several Shakespeare Quartos, and a first edition of Milton's Paradise Lost, among other important literary texts. Yale's secret societies include Skull and Bones, Scroll and Key, Wolf's Head, Book and Snake, Elihu, Berzelius, St. Elmo, Manuscript, Brothers in Unity, Linonia, St. Anthony Hall, Shabtai, Myth and Sword, Daughters of Sovereign Government (DSG), Mace and Chain, ISO, Spade and Grave, and Sage and Chalice, among others. The two oldest existing honor societies are the Aurelian (1910) and the Torch Honor Society (1916). These are akin to Harvard finals clubs, Princeton eating clubs, and senior societies at the University of Pennsylvania. Yale seniors at graduation smash clay pipes underfoot to symbolize passage from their "bright college years", though in recent history the pipes have been replaced with "bubble pipes". ("Bright College Years", the university's alma mater, was penned in 1881 by Henry Durand, Class of 1881, to the tune of Die Wacht am Rhein.) Yale's student tour guides tell visitors that students consider it good luck to rub the toe of the statue of Theodore Dwight Woolsey on Old Campus; however, actual students rarely do so. In the second half of the 20th century Bladderball, a campus-wide game played with a large inflatable ball, became a popular tradition but was banned by the administration due to safety concerns. In spite of administration opposition, students revived the game in 2009, 2011, and 2014. Yale supports 35 varsity athletic teams that compete in the Ivy League Conference, the Eastern College Athletic Conference, and the New England Intercollegiate Sailing Association. Yale athletic teams compete intercollegiately at the NCAA Division I level. Like other members of the Ivy League, Yale does not offer athletic scholarships. Yale has numerous athletic facilities, including the Yale Bowl (the nation's first natural "bowl" stadium, and prototype for such stadiums as the Los Angeles Memorial Coliseum and the Rose Bowl), located at The Walter Camp Field athletic complex, and the Payne Whitney Gymnasium, the second-largest indoor athletic complex in the world. In 1970, the NCAA banned Yale from participating in all NCAA sports for two years, in reaction to Yale—against the wishes of the NCAA—playing its Jewish center Jack Langer in college games after Langer had played for Team United States at the 1969 Maccabiah Games in Israel with the approval of Yale President Kingman Brewster. Over the next two years the decision affected 300 Yale students, every student on the university's sports teams. 
In 2016, the men's basketball team won the Ivy League Championship title for the first time in 54 years, earning a spot in the NCAA Division I men's basketball tournament. In the first round of the tournament, the Bulldogs beat the Baylor Bears 79–75 in the school's first-ever tournament win. In May 2018, the men's lacrosse team defeated the Duke Blue Devils to claim their first-ever NCAA Division I Men's Lacrosse Championship, becoming the first Ivy League school to win the title since the Princeton Tigers in 2001. Yale crew is the oldest collegiate athletic team in America and won the Olympic gold medal in the men's eight in 1924 and 1956. Over 170 Yale alumni have competed in the Olympic Games and have won over 110 medals. The Yale Corinthian Yacht Club, founded in 1881, is the oldest collegiate sailing club in the world. October 21, 2000, marked the dedication of Yale's fourth new boathouse in 157 years of collegiate rowing. The Gilder Boathouse is named to honor former Olympic rower Virginia Gilder '79 and her father Richard Gilder '54, who gave $4 million towards the $7.5 million project. Yale also maintains the Gales Ferry site where the heavyweight men's team trains for the Yale-Harvard Boat Race. In 1896, Yale and Johns Hopkins played the first known ice hockey game in the United States. Since 2006, the two schools' ice hockey clubs have played a commemorative game. Yale students claim to have invented the Frisbee by tossing empty Frisbie Pie Company tins. Yale athletics are supported by the Yale Precision Marching Band. "Precision" is used here ironically; the band is a scatter-style band that runs wildly between formations rather than actually marching. The band attends every home football game and many away games, as well as most hockey and basketball games throughout the winter. Yale intramural sports are also a significant aspect of student life. Students compete for their respective residential colleges, fostering a friendly rivalry. The year is divided into fall, winter, and spring seasons, each of which includes about 10 different sports. About half the sports are coeducational. At the end of the year, the residential college with the most points (not all sports count equally) wins the Tyng Cup. Notable among the songs commonly played and sung at events such as commencement, convocation, alumni gatherings, and athletic games is the alma mater, "Bright College Years". Despite its popularity, "Boola Boola" is not the official fight song, though it is the origin of the university's unofficial motto. The official Yale fight song, "Bulldog", was written by Cole Porter during his undergraduate days and is sung after touchdowns at football games. Additionally, two other songs, "Down the Field" by C.W. O'Conner and "Bingo Eli Yale", also by Cole Porter, are still sung at football games. According to College Fight Songs: An Annotated Anthology published in 1998, "Down the Field" ranks as the fourth-greatest fight song of all time. The school mascot is "Handsome Dan", the Yale bulldog, and the Yale fight song contains the refrain, "Bulldog, bulldog, bow wow wow". The school color, since 1894, is Yale Blue. Yale's Handsome Dan is believed to be the first college mascot in America, having been established in 1889. Yale has faced significant criticism for its handling of student mental health on campus. 
Suicidal and depressed students say that Yale forced them to medically withdraw rather than provide them with academic accommodations under the Americans with Disabilities Act, and in 2018 the Ruderman Family Foundation ranked Yale as having the worst mental health policies in the Ivy League. Students at Yale say that the university's policies force them to hide their depression and avoid seeking help, for fear of being forced to leave. One prominent case was that of Luchang Wang, who died by suicide in 2015 after making a Facebook post saying that she needed time to deal with her mental health issues but could not face being forced to medically withdraw for an entire year with an uncertain chance of being readmitted: "Dear Yale, I loved being here. I only wish I could've had some time. I needed time to work things out and to wait for new medication to kick in, but I couldn't do it in school, and I couldn't bear the thought of having to leave for a full year, or of leaving and never being readmitted. Love, Luchang." Wang had previously withdrawn from school due to mental health issues, and was afraid of being forced to withdraw again, as a second readmission attempt would be considerably more difficult for her. A friend of Wang said that she routinely lied to her university therapist to avoid being kicked out, and another student said that many at Yale lie to their counselors as "there's no clear standard established that says exactly what students will get involuntarily hospitalized or withdrawn for". In response, the university convened a commission to evaluate its readmission policies after a mental health withdrawal, renaming the process "reinstatement", eliminating the $50 reapplication fee, and giving students 5–6 more days to make their decision on a mental health withdrawal. For students who do seek help, waitlists for therapy can be months long, with individual counselling sessions only 30 minutes in length. In 2022, after a Washington Post article about its medical withdrawal policies, the school increased the number of mental health clinicians on campus from 51 to 60 and promised further changes. In 2023, after a lawsuit was filed against the school for what the plaintiffs described as discrimination, the university changed the name of a "medical withdrawal" to a "medical leave of absence", saying that the "leave of absence" terminology would allow students to remain on Yale's insurance while away from the school. The new policy also allowed students on a leave of absence to participate in extracurricular clubs and visit campus, something a student on medical withdrawal was banned from doing. A representative of Yale also said that the criticism of its policies "misrepresents our efforts and unwavering commitment to supporting our students, whose well-being and success are our primary focus" and that "the mental health of our students is a very, very high priority". After the death of undergraduate student Rachael Shaw Rosenbaum by suicide, an organization called Elis for Rachael was formed, advocating for mental health-related reforms. The group has sued Yale, demanding changes. Notable people Yale has had many financial supporters, but some stand out by the magnitude or timeliness of their contributions. 
Among those who have made large donations commemorated at the university are Elihu Yale, Jeremiah Dummer, the Vanderbilt family, the Harkness family (Edward, Anna, and William), the Beinecke family (Edwin, Frederick, and Walter), John William Sterling, Payne Whitney, Joseph Earl Sheffield, Paul Mellon, Charles B. G. Murphy, Joseph Tsai, William K. Lanman, and Stephen Schwarzman. The Yale Class of 1954, led by Richard Gilder, donated $70 million in commemoration of their 50th reunion. Charles B. Johnson, a 1954 graduate of Yale College, pledged a $250 million gift in 2013 to support the construction of two new residential colleges. The colleges have been named respectively in honor of Pauli Murray and Benjamin Franklin. A $100 million contribution by Stephen Adams enabled the Yale School of Music to become tuition-free and the Adams Center for Musical Arts to be built, while a $150 million contribution by David Geffen enabled the Yale School of Drama (renamed the David Geffen School of Drama at Yale) to become tuition-free as well. Yale has produced many distinguished alumni in various fields, in both the public and private sectors. According to 2020 data, around 71% of undergraduates join the workforce, while 17% attend graduate or professional schools. Yale graduates have been recipients of 263 Rhodes Scholarships, 123 Marshall Scholarships, 67 Truman Scholarships, 21 Churchill Scholarships, and 9 Mitchell Scholarships. The university is the second-largest producer of Fulbright Scholars, with 1,244 in its history, and has produced 89 MacArthur Fellows. The U.S. Department of State Bureau of Educational and Cultural Affairs ranked Yale fifth among research institutions producing the most 2020–2021 Fulbright Scholars. Thirty-one living billionaires are alumni. One of the most popular undergraduate majors is political science, with many graduates going on to serve in government and politics. Former presidents who attended Yale as undergraduates include William Howard Taft, George H. W. Bush, and George W. Bush, while former presidents Gerald Ford and Bill Clinton attended Yale Law School. Among vice presidents, influential antebellum-era politician John C. Calhoun graduated from Yale, while JD Vance graduated from Yale Law School. Former world leaders include Italian prime minister Mario Monti, Turkish prime minister Tansu Çiller, South Korean prime minister Lee Hong-koo, Mexican president Ernesto Zedillo, German president Karl Carstens, Philippine president José Paciano Laurel, Latvian president Valdis Zatlers, Taiwanese premier Jiang Yi-huah, and Malawian president Peter Mutharika, among others. Prominent royals who graduated are Crown Princess Victoria of Sweden and Olympia Bonaparte, Princess Napoléon. Alumni have had a considerable presence in the U.S. government across all three branches. On the U.S. Supreme Court, 19 justices have been alumni, including current Associate Justices Sonia Sotomayor, Samuel Alito, Clarence Thomas, and Brett Kavanaugh. Alumni have been U.S. Senators, including current senators Michael Bennet, Richard Blumenthal, Cory Booker, Sherrod Brown, Chris Coons, Amy Klobuchar, and Sheldon Whitehouse. Current and former cabinet members include Secretaries of State John Kerry, Hillary Clinton, Cyrus Vance, and Dean Acheson; U.S. Secretaries of the Treasury Oliver Wolcott, Robert Rubin, Nicholas F. Brady, Steven Mnuchin, Janet Yellen, and Scott Bessent; U.S. Attorneys General Nicholas Katzenbach, Edwin Meese, John Ashcroft, and Edward H. Levi; and many others. 
Peace Corps founder and American diplomat Sargent Shriver and public official and urban planner Robert Moses are Yale alumni. Yale has produced numerous award-winning authors and influential writers, such as Nobel Prize in Literature laureate Sinclair Lewis and Pulitzer Prize winners Stephen Vincent Benét, Thornton Wilder, Doug Wright, and David McCullough. Academy Award–winning actors, actresses, and directors include Jodie Foster, Paul Newman, Meryl Streep, Elia Kazan, George Roy Hill, Lupita Nyong'o, Oliver Stone, and Frances McDormand. Alumni from Yale have also made notable contributions to both music and the arts. Leading 20th-century American composer Charles Ives, Broadway composer Cole Porter, Grammy Award winner David Lang, multi-Tony Award-winning composer and musicologist Maury Yeston, and award-winning jazz pianist and composer Vijay Iyer all hail from Yale. Hugo Boss Prize winner Matthew Barney, famed American sculptor Richard Serra, Kehinde Wiley, who painted President Barack Obama's presidential portrait, MacArthur Fellows and contemporary artists Tschabalala Self, Titus Kaphar, Richard Whitten, and Sarah Sze, Pulitzer Prize–winning cartoonist Garry Trudeau, and National Medal of Arts photorealist painter Chuck Close all graduated from Yale. Additional alumni include architect and Presidential Medal of Freedom winner Maya Lin, Pritzker Prize winner Norman Foster, and Gateway Arch designer Eero Saarinen. Journalists and pundits include Dick Cavett, Chris Cuomo, Anderson Cooper, William F. Buckley Jr., Blake Hounshell, Nia-Malika Henderson, and Fareed Zakaria. In business, numerous Yale alumni and former students have gone on to found influential businesses, such as William Boeing (Boeing, United Airlines), Briton Hadden and Henry Luce (Time Magazine), Stephen A. Schwarzman (Blackstone Group), Frederick W. Smith (FedEx), Juan Trippe (Pan Am), Harold Stanley (Morgan Stanley), Bing Gordon (Electronic Arts), and Ben Silbermann (Pinterest). Other business people from Yale include former chairman and CEO of Sears Holdings Edward Lampert, former Time Warner president Jeffrey Bewkes, former PepsiCo chairperson and CEO Indra Nooyi, sports agent Donald Dell, and investor and philanthropist Sir John Templeton. Alumni distinguished in academia include literary critic and historian Henry Louis Gates, economists Irving Fisher, Mahbub ul Haq, and Nobel Prize laureate Paul Krugman; Nobel Prize in Physics laureates Ernest Lawrence and Murray Gell-Mann; Fields Medalist John G. Thompson; Human Genome Project leader and National Institutes of Health director Francis S. Collins; brain surgery pioneer Harvey Cushing; pioneering computer scientist Grace Hopper; influential mathematician and chemist Josiah Willard Gibbs; National Women's Hall of Fame inductee and biochemist Florence B. Seibert; Turing Award recipient Ron Rivest; inventors Samuel F.B. Morse and Eli Whitney; Nobel Prize in Chemistry laureate John B. Goodenough; lexicographer Noah Webster; and theologians Jonathan Edwards and Reinhold Niebuhr. 
In the sporting arena, alumni include baseball players Ron Darling and Craig Breslow, who in the major leagues played with fellow Yale alum Ryan Lavarnway, and baseball executives Theo Epstein and George Weiss; basketball players Chris Dudley, Tony Lavelli, Miye Oni, and Danny Wolf; football players Calvin Hill, Gary Fencik, Amos Alonzo Stagg, and "the Father of American Football" Walter Camp; ice hockey players Chris Higgins and Olympian Helen Resor; Olympic figure skating champions Sarah Hughes and Nathan Chen; nine-time U.S. Squash men's champion Julian Illingworth; Olympic swimmer Don Schollander; Olympic rowers Josh West and Rusty Wailes; Olympic sailor Stuart McNay; Olympic runner Frank Shorter; and others. In fiction and popular culture Yale University is a cultural referent as an institution that produces some of the most elite members of society, and its grounds, alumni, and students have been prominently portrayed in fiction and U.S. popular culture. For example, Owen Johnson's novel Stover at Yale follows the college career of Dink Stover, and Frank Merriwell, the model for all later juvenile sports fiction, plays football, baseball, crew, and track at Yale while solving mysteries and righting wrongs. Yale University is also mentioned in F. Scott Fitzgerald's novel The Great Gatsby. The narrator, Nick Carraway, wrote a series of editorials for the Yale News, and Tom Buchanan was "one of the most powerful ends that ever played football" for Yale.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Iran%E2%80%93United_States_relations] | [TOKENS: 9457]
Contents Iran–United States relations Modern relations between Iran and the United States are turbulent and have a troubled history. They began in the mid-to-late 19th century, when Iran was known to the Western world as Qajar Persia. Persia was very wary of British and Russian colonial interests during the Great Game. By contrast, the United States was seen as a more trustworthy foreign power, and the Americans Arthur Millspaugh and Morgan Shuster were even appointed treasurers-general by the Shahs of the time. During World War II, Iran was invaded by the United Kingdom and the Soviet Union, both US allies, but relations continued to be positive after the war until the later years of the government of Mohammad Mosaddegh, who was overthrown by a coup organized by the Central Intelligence Agency and aided by MI6. This was followed by an era of close alliance between Shah Mohammad Reza Pahlavi's authoritarian regime and the US government, Iran being one of the US's closest allies during the Cold War, which was in turn followed by a dramatic reversal and disagreement between the two countries after the 1979 Iranian Revolution. The two nations have had no formal diplomatic relations since April 7, 1980. Instead, Pakistan serves as Iran's protecting power in the United States, while Switzerland serves as the United States' protecting power in Iran. Contacts are carried out through the Iranian Interests Section of the Pakistani Embassy in Washington, D.C., and the US Interests Section of the Swiss Embassy in Tehran. In August 2018, Supreme Leader of Iran Ali Khamenei banned direct talks with the United States. According to the US Department of Justice, Iran has since attempted to assassinate US officials and dissidents, including US president Donald Trump. Iranian explanations for the animosity with the United States include "the natural and unavoidable conflict between the Islamic system" and "such an oppressive power as the United States, which is trying to establish a global dictatorship and further its own interests by dominating other nations and trampling on their rights", as well as the United States' support for Israel ("the Zionist entity"). In the West, however, different explanations have been considered, including the Iranian government's need for an external bogeyman to furnish a pretext for domestic repression against pro-democratic forces and to bind the government to its loyal constituency. The United States attributes the worsening of relations to the 1979–81 Iran hostage crisis, Iran's repeated human rights abuses since the Islamic Revolution, different restrictions on using spy methods on democratic revolutions by the US, its anti-Western ideology, its nuclear program, and its funding of terrorist organizations like Hamas (Gaza), Hezbollah (Lebanon), and the Houthis (Yemen) as proxies. Since 1995, the United States has had an embargo on trade with Iran. In 2015, the United States led successful negotiations for a nuclear deal (the Joint Comprehensive Plan of Action) intended to place substantial limits on Iran's nuclear program, including IAEA inspections and limitations on enrichment levels. In 2016, most sanctions against Iran were lifted. The Trump administration unilaterally withdrew from the nuclear deal and re-imposed sanctions in 2018, initiating what became known as the "maximum pressure campaign" against Iran. In response, Iran gradually reduced its commitments under the nuclear deal and eventually exceeded pre-JCPOA enrichment levels. 
According to a 2013 BBC World Service poll, 5% of Americans view Iranian influence positively, with 87% expressing a negative view, the most unfavorable perception of Iran in the world. On the other hand, research has shown that most Iranians hold a positive attitude about the American people, though not the US government. According to a 2019 survey by IranPoll, 13% of Iranians have a favorable view of the United States, with 86% expressing an unfavorable view, the most unfavorable perception of the United States in the world. According to a 2018 Pew poll, 39% of Americans say that limiting the power and influence of Iran should be a top foreign policy priority. Relations tend to improve when the two countries have overlapping goals, such as repelling Sunni militants during the Iraq War and the intervention against the Islamic State in the region.

History

American newspapers in the 1720s were uniformly pro-Iranian, especially during the revolt of Afghan emir Mahmud Hotak (r. 1722–1725) against the Safavid dynasty. Political relations between Qajar Persia and the United States began when the Shah of Iran, Nassereddin Shah Qajar, officially dispatched Iran's first ambassador, Mirza Abolhasan, to Washington, D.C. in 1856. In 1883, Samuel G. W. Benjamin was appointed by the United States as the first official diplomatic envoy to Iran; however, ambassadorial relations were not established until 1944. The US had little interest in Persian affairs, and its reputation as a trustworthy outsider did not suffer. The Persians again sought US help in straightening out their finances after World War I. This mission was opposed by powerful vested interests and eventually was withdrawn with its task incomplete. During the Persian Constitutional Revolution in 1909, American Howard Baskerville died in Tabriz while fighting with a militia in a battle against royalist forces. After the Iranian parliament appointed United States financier Morgan Shuster as Treasurer General of Iran in 1911, an American was killed in Tehran by gunmen thought to be affiliated with Russian or British interests. Shuster became even more active in supporting the Constitutional Revolution of Iran financially. Arthur Millspaugh, an American economic adviser, was sent to Persia in 1923 as a private citizen to help reform its inefficient administration. His presence was seen by Persians as a means to attract foreign investment and counterbalance European influence. The mission ended in 1928 after Millspaugh fell out of favor with the shah. Until World War II, relations between Iran and the United States remained cordial. As a result, many Iranians sympathetic to the Persian Constitutional Revolution came to view the US as a "third force" in their struggle to expel British and Russian dominance in Persian affairs. American industrial and business leaders were supportive of Iran's drive to modernize its economy and to expel British and Russian influence from the country. Reza Khan, a military officer in Persia's Cossack Brigade, came to power after leading a British-backed coup in 1921. He later deposed the Qajar dynasty, declared himself shah (king), and took the name Reza Pahlavi. He launched modernization efforts, building a national railway and introducing secular education, while also censoring the press, suppressing unions and parties, and later banning the hijab in favor of Western dress.
In 1936, Iran withdrew its ambassador from Washington for nearly one year after the publication of an article criticizing Reza Shah in the New York Daily Herald. After a dispute over German influence during World War II, Reza Shah was forced to abdicate in 1941 and was succeeded by his son, Mohammad Reza Pahlavi. During the rest of World War II, Iran became a major conduit for British and American aid to the Soviet Union and an avenue through which over 120,000 Polish refugees and Polish Armed Forces fled the Axis advance. At the 1943 Tehran Conference, the Allied "Big Three"—Joseph Stalin, Franklin D. Roosevelt, and Winston Churchill—issued the Tehran Declaration to guarantee the post-war independence and boundaries of Iran. In 1949, the Constituent Assembly of Iran gave the shah the power to dissolve the parliament. Until the outbreak of World War II, the United States had no active policy toward Iran. When the Cold War began, the United States was alarmed by the attempt by the Soviet Union to set up separatist states in Iranian Azerbaijan and Kurdistan, as well as its demand for military rights to the Dardanelles in 1946. This fear was enhanced by the loss of China to communism, the uncovering of Soviet spy rings, and the start of the Korean War. In 1951, Iran nationalized its oil industry, effectively seizing the assets of the Anglo-Iranian Oil Company (AIOC). On April 28, 1951, Mohammad Mosaddegh was elected as Prime Minister by the Parliament of Iran. The British planned to retaliate by attacking Iran, but U.S. President Harry S. Truman pressed Britain to moderate its position in the negotiations and not to invade Iran. American policies fostered a sense in Iran that the United States supported Mosaddegh, along with optimism that the oil dispute would soon be resolved through "a series of innovative proposals" that would provide Iran with "significant amounts of economic aid." Mosaddegh visited Washington, and the American government made "frequent statements expressing support for him." At the same time, the United States honored the British embargo and, without Truman's knowledge, the Central Intelligence Agency station in Tehran had been "carrying out covert activities" against Mosaddegh and the National Front "at least since the summer of 1952". In 1953, the U.S. and Britain orchestrated a coup to overthrow Iran's prime minister Mohammad Mosaddegh, fearing communist influence and economic instability after Iran nationalized its oil industry. The coup, led by the CIA and MI6, initially failed but succeeded on a second attempt. It reinstalled Shah Mohammad Reza Pahlavi, who then received substantial U.S. financial and military support. The U.S. helped establish SAVAK, the Shah's brutal secret police, to maintain his rule. Many liberal Iranians believe that the coup and the subsequent U.S. support for the shah enabled the Shah's arbitrary rule, contributing to the "deeply anti-American character" of the later 1979 revolution. After the coup, the United States played a central role in reorganizing Iran's oil sector. Under U.S. pressure, BP joined a consortium of Western companies to resume Iranian oil exports. The consortium operated on behalf of the state-owned National Iranian Oil Company (NIOC), which retained formal ownership of Iran's oil and infrastructure. While Iran received 50% of the profits, U.S. companies collectively secured 40% of the remaining share.
However, the consortium maintained operational control, barred Iranian oversight of its financial records, and excluded Iranians from its board. The agreement was part of a wider transition from British to American dominance in the region and worldwide. While initially seen as a Cold War success, the coup later became a source of deep resentment, with critics calling it a blow to democracy and a lasting stain on U.S.-Iran relations. Iran's nuclear program was launched as part of the Atoms for Peace program announced by U.S. President Eisenhower in 1953. The U.S. helped Iran create its nuclear program in 1957 by providing it with its first nuclear reactor and nuclear fuel, and after 1967 with weapons-grade enriched uranium. Iran was among the 51 original signatories of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) on July 1, 1968, and its parliament ratified the treaty in February 1970. The participation of the U.S. and Western European governments continued until the 1979 Iranian Revolution. Iran's border with the Soviet Union, and its position as the largest, most powerful country in the oil-rich Persian Gulf, made Iran a "pillar" of US foreign policy in the Middle East. In 1960, Iran joined four other countries to form the Organization of the Petroleum Exporting Countries (OPEC), aiming to challenge the dominance of Western oil companies and reclaim control over national oil resources. In the 1960s and 1970s, Iran's oil revenues grew considerably. Beginning in the mid-1960s, this development "weakened U.S. influence in Iranian politics" while strengthening the Iranian state's power over its own population.[citation needed] By the 1970s, surging OPEC profits gave the group substantial leverage over Western economies and elevated Iran's strategic value as a U.S. ally. According to scholar Homa Katouzian, this put the United States "in the contradictory position of being regarded" by the Iranian public "as the chief architect and instructor of the regime," while "its real influence" in domestic Iranian politics and policies "declined considerably". James Bill and other historians have said that between 1969 and 1974 U.S. President Richard Nixon actively recruited the Shah as an American puppet and proxy. However, Roham Alvandi argues that it worked the other way around, with the Shah taking the initiative. President Nixon, who had first met the Shah in 1953, regarded him as a westernizing anticommunist statesman who deserved American support now that the British were withdrawing from the region. They met again in 1972, and the Shah agreed to buy large quantities of American military hardware and took responsibility for ensuring political stability and fighting off Soviet subversion throughout the region. Permitting Iran to purchase U.S. arms served Cold War objectives by securing the Shah's alignment with Washington after Iran had briefly explored Soviet alternatives in the 1960s, while also benefiting the American economy. However, because of the 1973 Arab–Israeli War and the subsequent Arab oil embargo against the United States, oil prices rose sharply. This enabled the shah to buy more advanced weaponry than U.S. officials had expected, which caused concern in Washington. In the 1970s, approximately 25,000 American technicians were deployed to Iran to maintain military equipment (such as F-14s) that had been sold to the Shah's government. Cultural relations between the two countries remained cordial until 1979.
Pahlavi University, Sharif University of Technology, and Isfahan University of Technology, three of Iran's top academic universities, were directly modeled on private American institutions such as the University of Chicago, MIT, and the University of Pennsylvania. The Shah was generous in awarding financial gifts to American universities. For example, the University of Southern California received an endowed chair of petroleum engineering, and a million-dollar donation was given to George Washington University to create an Iranian Studies program. Prior to the Iranian Revolution of 1979, many Iranian citizens, especially students, resided in the United States and had a positive and welcoming attitude toward America and Americans. From 1950 to 1979, an estimated 800,000 to 850,000 Americans had visited or lived in Iran, and had often expressed their admiration for the Iranian people. Under President Gerald Ford, U.S.–Iran relations began to cool. Unlike Richard Nixon, Ford lacked a close personal relationship with the Shah, and his administration took a more cautious stance on nuclear cooperation. Negotiations over American nuclear exports to Iran faltered as Ford insisted on additional safeguards beyond those required by the Nuclear Non-Proliferation Treaty, which the Shah rejected as an infringement on Iran's sovereignty. In 1975, President Ford approved a plan allowing Iran to process U.S. nuclear materials and purchase a plutonium reprocessing facility. Henry Kissinger later described it as a commercial deal with an ally, with no discussion of weapons concerns. Despite continuing Nixon's arms sales policy, granting Iran wide access to U.S. weapons, Ford faced internal opposition and growing concerns in Congress. Meanwhile, U.S. officials, particularly Treasury Secretary William Simon, grew increasingly critical of the Shah's role in maintaining high oil prices at a time when surging inflation was driving the U.S. economy toward a recession. In 1976, the U.S. covertly supported Saudi Arabia's move to drive down oil prices, undercutting Iran's revenues. The resulting oil price collapse in early 1977 triggered a severe financial crisis in Iran, forcing austerity measures that led to rising unemployment and social unrest. These developments significantly weakened the Shah's regime and contributed to the conditions that precipitated the 1979 Islamic Revolution. Later declassified documents suggest that key U.S. policymakers underestimated the risks of destabilizing their long-time ally. In the late 1970s, American President Jimmy Carter emphasized human rights in his foreign policy but was lenient with the Shah in private. By 1977, Iran had garnered unfavorable publicity in the international community for its poor human rights record. That year, the Shah responded to Carter's "polite reminder" by granting amnesty to some prisoners and allowing the Red Cross to visit prisons. Through 1977, liberal opposition formed organizations and issued open letters denouncing the Shah's regime. Carter angered anti-Shah Iranians with a New Year's Eve 1978 toast to the Shah in which he said: "Under the Shah's brilliant leadership Iran is an island of stability in one of the most troublesome regions of the world. There is no other state figure whom I could appreciate and like more." Observers disagree over the nature of United States policy toward Iran under Carter as the Shah's regime crumbled. According to historian Nikki Keddie, the Carter administration followed "no clear policy" on Iran.
US National Security Adviser Zbigniew Brzezinski "repeatedly assured Pahlavi that the U.S. backed him fully". At the same time, officials in the State Department believed the revolution was unstoppable. After visiting the Shah in 1978, Secretary of the Treasury W. Michael Blumenthal complained of the Shah's emotional collapse. Brzezinski and Energy Secretary James Schlesinger were adamant in assurances that the Shah would receive military support. Sociologist Charles Kurzman argues that the Carter administration was consistently supportive of the Shah and urged the Iranian military to stage a "last-resort coup d'état". The Iranian/Islamic Revolution (1978–1979) ousted the Shah and replaced him with the anti-American Supreme Leader Ayatollah Ruhollah Khomeini. The U.S. State Department and intelligence services "consistently underestimated the magnitude and long-term implications of this unrest". Six months before the revolution culminated, the CIA had produced a report stating that "Iran is not in a revolutionary or even a 'prerevolutionary' situation." Revolutionary students feared the power of the United States, particularly the CIA, to overthrow a new Iranian government. One source of this concern was a book by CIA agent Kermit Roosevelt Jr. titled Countercoup: The Struggle for Control of Iran. Many students had read excerpts from the book and thought that the CIA would attempt to implement this countercoup strategy. Khomeini referred to America as the "Great Satan" and promptly removed the Shah's last prime minister, replacing him with the politician Mehdi Bazargan. Until this point, the Carter administration was still hoping for normal relations with Iran, sending National Security Adviser Zbigniew Brzezinski to meet with Iranian officials. The Islamic revolutionaries wished to extradite and execute the ousted Shah, and Carter refused to give him any further support or help return him to power. The Shah, suffering from terminal cancer, requested entry into the United States for treatment. The American embassy in Tehran opposed the request, as its staff were intent on stabilizing relations between the new interim revolutionary government of Iran and the United States. However, President Carter agreed to let the Shah in, after pressure from Henry Kissinger, Nelson Rockefeller and other pro-Shah political figures. Iranians' suspicion that the Shah was actually trying to conspire against the Iranian Revolution grew; thus, this incident was often used by the Iranian revolutionaries to justify their claims that the former monarch was an American puppet, and this led to the storming of the American embassy by radical students allied with Khomeini. On November 4, 1979, Iranian student revolutionaries, with Ayatollah Khomeini's approval, seized the U.S. embassy in Tehran, holding 52 American diplomats hostage for 444 days in response to the U.S. granting asylum to the deposed Shah. The crisis, seen in Iran as a stand against American influence and in the U.S. as a violation of diplomatic law, led to failed rescue attempts and lasting damage to Iran–U.S. relations. Six Americans escaped via the CIA-Canadian "Canadian Caper" operation, later dramatized in the film Argo. In response to the seizure of the embassy, Carter banned Iranian oil imports and, with Executive Order 12170, froze about $12 billion in Iranian assets, including bank deposits, gold and other properties. These were the first of a number of international sanctions against Iran.
The crisis ended with the Algiers Accords in January 1981. Under the terms of the agreement and upon Iran's compliance, the hostages were released and allowed to leave Iran. One of the chief provisions of the Accords was that the United States would lift the freeze on Iranian assets and remove trade sanctions. The agreement also established the Iran–U.S. Claims Tribunal in The Hague to handle claims brought by Americans against Iran, as well as claims by Iran against Americans and the former shah. Diplomatic ties remain severed, with Switzerland and Pakistan handling each country's interests. American intelligence and logistical support played a crucial role in arming Iraq in the Iran–Iraq War. However, Bob Woodward states that the United States gave information to both sides, hoping "to engineer a stalemate". In search of a new order in the region, Washington adopted a policy designed to contain both sides economically and militarily. During the second half of the Iran–Iraq War, the Reagan administration pursued several sanction bills against Iran; on the other hand, it established full diplomatic relations with Saddam Hussein's Ba'athist government in Iraq by removing it from the US list of State Sponsors of Terrorism in 1984. According to the U.S. Senate Banking Committee, the administrations of Presidents Reagan and George H. W. Bush authorized the sale to Iraq of numerous dual-use items, including poisonous chemicals and deadly biological agents, such as anthrax and bubonic plague. The Iran–Iraq War ended with both sides agreeing to a ceasefire in 1988. Hezbollah, an Iran-backed Shi'ite Islamist group, has carried out multiple anti-American attacks, including the 1983 U.S. Embassy bombing in Beirut (killing 63, including 17 Americans), the 1983 Beirut barracks bombing (killing 241 U.S. service members), and the 1996 Khobar Towers bombing. U.S. courts have ruled Iran responsible for these attacks, with evidence showing Hezbollah operated under Iran's direction and that Supreme Leader Ali Khamenei authorized the Khobar Towers bombing. According to the Tower Commission report: In 1983, the U.S. helped bring to the attention of Tehran the threat inherent in the extensive infiltration of the government by the communist Tudeh Party and Soviet or pro-Soviet cadres in the country. Using this information, the Khomeini government took measures, including mass executions, that virtually eliminated the pro-Soviet infrastructure in Iran. To evade congressional rules regarding an arms embargo, officials in President Ronald Reagan's administration arranged in the mid-1980s to sell arms to Iran in an attempt to improve relations and enlist Iranian influence in securing the release of hostages held in Lebanon. Oliver North of the National Security Council diverted proceeds from the arms sales to fund anti-Marxist Contra rebels in Nicaragua. In November 1986, Reagan issued a statement denying the arms sales. One week later, he confirmed that weapons had been transferred to Iran, but denied that they were part of an exchange for hostages. Later investigations by Congress and an independent counsel disclosed details of both operations and noted that documents relating to the affair were destroyed or withheld from investigators by Reagan administration officials on national security grounds. The revelation that profits from the arms sales had been illegally funneled to the Contras created a major political scandal for Reagan. In 1988, the U.S.
launched Operation Praying Mantis in retaliation for Iran mining the Persian Gulf during the Iran–Iraq War, following Operation Nimble Archer. It was the largest American naval operation since World War II, with strikes that destroyed two Iranian oil platforms and sank a major warship. Iran sought reparations at the International Court of Justice, but the court dismissed the claim. The attack helped pressure Iran into agreeing to a ceasefire with Iraq later that year. On July 3, 1988, during the Iran–Iraq War, the U.S. Navy's USS Vincennes mistakenly shot down Iran Air Flight 655, a civilian Airbus A300B2, killing 290 people. The U.S. initially claimed the aircraft was a warplane and outside the civilian air corridor, but later acknowledged the downing was an accident in a combat zone. Iran, however, argued it was gross negligence and sued the U.S. in the International Court of Justice, resulting in compensation for the victims' families. The U.S. expressed regret, calling it a tragic accident, while the Vincennes crew received military honors. Newly elected U.S. President George H. W. Bush announced a "goodwill begets goodwill" gesture in his inaugural speech on January 20, 1989. The Bush administration urged President of Iran Akbar Hashemi Rafsanjani to use Iran's influence in Lebanon to obtain the release of the remaining US hostages held by Hezbollah. Bush indicated there would be a reciprocal gesture toward Iran by the United States. Relevant background events during the first year of Bush's administration include the ending of the Iran–Iraq War and the death of Ayatollah Khomeini. Khomeini believed he had a sacred duty to purge Iran of what he saw as Western corruption and moral decay, aiming to restore the country to religious purity under Islamic theocratic rule. Khomeini was succeeded by Ali Khamenei. In 1990, Iraq invaded Kuwait, which led to the Gulf War. The U.S. persuaded Iran to vote in favor of UN Resolution 678, which issued an ultimatum for Iraq to withdraw from Kuwait, by promising to lift its objections to a series of World Bank loans. The first loan, totaling $250 million, was approved just one day before the ground assault on Iraq began. The war ended on February 28, 1991. In April 1995, President Bill Clinton imposed a total oil and trade embargo on dealings with Iran by American companies. This ended trade, which had been growing following the end of the Iran–Iraq War. The next year, the American Congress passed the Iran–Libya Sanctions Act, designed to prevent other countries from making large investments in Iranian energy. The act was denounced as invalid by the European Union. In January 1998, newly elected Iranian President Mohammad Khatami called, in a CNN interview, for a "dialogue of civilizations" with the United States. In the interview, Khatami invoked Alexis de Tocqueville's Democracy in America to explain similarities between American and Iranian quests for freedom. American Secretary of State Madeleine Albright responded positively. This brought free travel between the countries as well as an end to the American embargo of Persian carpets and pistachios. Relations then stalled due to opposition from Iranian conservatives and American preconditions for discussions, including changes in Iranian policy on Israel, nuclear energy, and support for terrorism. Four members of the United States Congress (Senator Arlen Specter, Representative Bob Ney, Representative Gary Ackerman, and Representative Eliot L.
Engel) held informal talks in New York City with several Iranian leaders in August 2000. The Iranians included Mehdi Karroubi, speaker of the Majlis of Iran (Iranian Parliament); Maurice Motamed, a Jewish member of the Majlis; and three other Iranian parliamentarians. Iran–United States relations during the George W. Bush administration (2001–2009) were marked by heightened tensions, mutual distrust, and periodic attempts at limited engagement. Following the September 11 attacks in 2001, Iran was initially sympathetic toward the United States. Relations deteriorated sharply after President George W. Bush labeled Iran part of the "Axis of Evil" in 2002, accusing the country of pursuing weapons of mass destruction that posed a threat to the U.S. In 2003, Swiss Ambassador Tim Guldimann relayed an unofficial proposal to the U.S. outlining a possible "grand bargain" with Iran. He claimed it was developed in cooperation with Iran, but it lacked formal Iranian endorsement, and the Bush administration did not pursue the offer. Between 2003 and 2008, Iran accused the United States of repeatedly violating its territorial sovereignty through drone incursions, covert operations, and support for opposition groups. In August 2005, Mahmoud Ahmadinejad became Iran's president. During his presidency, attempts at dialogue, including a personal letter to President Bush, were dismissed by U.S. officials, while public tensions grew over Iran's nuclear program, U.S. foreign policy, and Ahmadinejad's controversial remarks at international forums. The United States intensified covert operations against Iran, including alleged support for militant groups such as PEJAK and Jundullah, cross-border activities, and expanded CIA and Special Forces missions. Iran was repeatedly accused by the U.S. of arming and training Iraqi insurgents, including Shiite militias and groups linked to Hezbollah, with American officials citing captured weapons, satellite images, and detainee testimonies. During this period, additional flashpoints included the U.S. raid on Iran's consulate in Erbil, sanctions targeting Iranian financial institutions, a naval dispute in the Strait of Hormuz, and the public disclosure of covert action plans against Iran.[citation needed] Iran–United States relations during the Obama administration (2009–2017) were defined by a shift from confrontation to cautious engagement, culminating in the landmark nuclear agreement of 2015. At the start of Obama's presidency, both sides exchanged public messages signaling a possible thaw, with Iran voicing long-standing grievances and the United States calling for mutual respect and responsibility. However, after Mahmoud Ahmadinejad's disputed re-election in 2009, which sparked mass protests and allegations of fraud, the United States responded with skepticism and concern. In late 2011 and early 2012, Iran threatened to close the Strait of Hormuz and warned a U.S. aircraft carrier not to return to the Persian Gulf. The U.S. rejected the warning and maintained its naval presence, while experts doubted Iran's ability to sustain a blockade. The 2013 election of President Hassan Rouhani, seen as a moderate, marked a shift in tone, with his outreach at the UN and a historic phone call with Obama signaling renewed diplomatic engagement. While high-level contact resumed and symbolic gestures were exchanged, conservative backlash in Iran highlighted internal divisions over rapprochement.
In 2015, the United States and other world powers reached the Joint Comprehensive Plan of Action (JCPOA) with Iran, under which Iran agreed to limit its nuclear program in exchange for sanctions relief. The agreement marked a major diplomatic achievement for the Obama administration, though it faced skepticism in Congress and mixed public support in the U.S. Despite the JCPOA, tensions between the United States and Iran persisted over ballistic missile tests, continued U.S. sanctions, and European business hesitancy due to fear of U.S. penalties. The administration also faced criticism for its handling of these issues, both from Iran and from political opponents. Iran–United States relations during the first Trump administration (2017–2021) were marked by a sharp policy shift from Obama's engagement-oriented approach. Trump began with a travel ban affecting Iranian citizens and withdrew from the Joint Comprehensive Plan of Action (JCPOA). A broader maximum pressure campaign followed, with over 1,500 sanctions targeting Iran's financial, oil, and shipping sectors, as well as foreign firms doing business with Iran, severely damaging its economy. The effort aimed to isolate Iran but met with strong resistance—even from U.S. allies—and often left Washington diplomatically isolated. Iran responded by threatening to resume unrestricted uranium enrichment, rejecting negotiations with the Trump administration, and intensifying its rhetoric. Tensions escalated in 2019 with U.S. intelligence reports of Iranian threats, attacks on oil tankers, the downing of a U.S. drone by Iran, and suspected Iranian strikes on Saudi oil facilities. President Trump called off retaliatory strikes, opting for cyberattacks and additional sanctions instead. A major escalation followed a December 2019 rocket attack on the K-1 Air Base in Iraq, which led to American airstrikes on Iranian-backed militias and a retaliatory attack on the U.S. embassy in Baghdad. On January 3, 2020, the U.S. assassinated Iranian General Qasem Soleimani in a drone strike, prompting Iranian missile attacks on U.S. bases in Iraq and heightened fears of war. The crisis deepened with the accidental downing of a Ukrainian passenger plane by Iranian forces and continued through early 2020 with retaliatory strikes and threats. Later in 2020, Iran blamed U.S. sanctions for limiting its COVID-19 response. Iran also launched a military satellite and was later accused of interfering in the U.S. presidential election and of carrying out proxy attacks. Trump's term ended with relations marked by continued hostility and unresolved disputes. Iran–United States relations during the Biden administration (2021–2025) were shaped by efforts to revive the 2015 nuclear agreement alongside ongoing regional tensions, sanctions, cyberattacks, and proxy conflicts. Early in Joe Biden's presidency, U.S. officials expressed interest in returning to the Joint Comprehensive Plan of Action (JCPOA), but negotiations in Vienna eventually stalled. Iran increased uranium enrichment and imposed retaliatory sanctions, while the U.S. imposed new sanctions over missile programs, oil exports, and human rights abuses. Tensions persisted throughout this era, marked by recurring proxy attacks on U.S. bases, which intensified following the outbreak of the Gaza war in late 2023, and by subsequent American retaliatory strikes. The period also saw disputes over the assassination of Qasem Soleimani and military escalations across the Gulf region.
In 2023, a breakthrough occurred with a U.S.–Iran prisoner swap and the release of frozen Iranian funds, though indirect diplomacy remained fragile. Iran was later accused of interfering in the 2024 U.S. presidential election through cyber operations and AI disinformation. Alleged assassination plots targeting Donald Trump and dissidents on U.S. soil further strained relations. By late 2024, relations remained adversarial, marked by unresolved security disputes and growing mistrust. In February 2025, Trump said he had given the military and his advisors instructions to obliterate Iran if he were to be assassinated. He signed an order reinstating the maximum pressure campaign against the Iranian government, while also calling for talks on a nuclear agreement. In a Friday prayer sermon on February 7, Khamenei dismissed negotiations and stated that the Iranian government should not make a deal with the US. On February 9, Trump said that he would rather make a deal with the Iranians than let them be bombed by Israel. In March 2025, Khamenei stated that he did not intend to negotiate with Trump. IRGC General Hossein Salami threatened the United States military with devastation. Trump threatened to hold the Iranian regime responsible for any shots fired by the Houthis. Putin and Trump reached an agreement that Iran should never be in a position to destroy Israel. In April, Ali Larijani, an advisor to Khamenei, warned Trump that Iran would build nuclear weapons. The Islamic Republic's military had allegedly recommended a preemptive strike on US military bases. Also in April, President Trump stated that the Iranians wanted direct negotiations. Iran–United States negotiations began on April 12, 2025. In April 2025, U.S. Congressmen Joe Wilson and Jimmy Panetta introduced a 'Free Iraq from Iran' bill. The legislation mandates the development of a comprehensive U.S. strategy to irreversibly dismantle Iran-backed militias, including the Popular Mobilization Forces (PMF), and calls for the suspension of U.S. assistance to Iraq until these militias are fully removed. The bill also imposes sanctions on Iraqi political and military figures aligned with Iran and provides support for Iraqi citizens and independent media to expose abuses committed by these militias. Its primary objective is to restore Iraq's sovereignty and reduce Iranian dominance without resorting to direct military intervention. In a speech during his May 2025 trip to the Middle East, Trump called Iran the region's most destructive force and denounced its leaders for having ruined green, fertile farmland and brought on rolling blackouts, calling on Iran to choose between war and violence and making a deal. On June 7, 2025, the U.S. Treasury Department imposed sanctions on 10 individuals and 27 entities, including Iranian nationals and firms based in the UAE and Hong Kong. The targets included the Zarringhalam brothers, accused of laundering billions via shell companies tied to the IRGC and Iran's Central Bank. The funds reportedly supported Iran's nuclear and missile programs, oil sales, and militant proxies. Relations further worsened after Iranian authorities threatened to attack U.S. and allied bases in the Middle East if the United States were to involve itself in the Iran–Israel war, after which the U.S. joined the Israeli attacks and bombed key Iranian nuclear sites. On June 21, 2025, the U.S., under the orders of President Trump, bombed three Iranian nuclear sites (Fordow, Natanz, and Isfahan), briefly joining the Iran–Israel war.
Following the attacks, Iran withdrew from the negotiations and suspended nuclear talks indefinitely. During the 2025–2026 Iranian protests, Trump repeatedly warned the Iranian authorities that the US would "intervene" if the Iranian government did not halt its crackdown on protesters. In January 2026, during the widespread and increasingly violent crackdown on nationwide protests in Iran, Trump stated that the US was seriously considering a range of responses, including potential military options, if Iranian government actions crossed established "red lines". Trump said that the US military was evaluating "very strong options" and that senior advisers were scheduled to discuss possible actions, while also noting that Iranian leaders had reached out for negotiations. Iranian officials stated that Iranian forces would retaliate against US forces and allied interests in the region if US forces attacked. On January 16, 2026, Trump announced that the Iranian leadership had reportedly canceled over 800 planned executions. On January 27, 2026, following the deployment of the USS Abraham Lincoln to the region the day before, a multi-day US aerial military drill in the region was announced. On the same day Trump said: "There is another beautiful war fleet sailing toward Iran right now." In addition, the Pentagon ordered US Air Force F-15E Strike Eagles that were deployed at Royal Air Force Lakenheath in Britain to move forward to American air bases in Jordan. In February 2026, following the heightened tensions in January, Iran and the United States held a new round of diplomatic talks in Muscat, Oman, with Oman acting as a mediator. The discussions primarily focused on Iran's nuclear program and the potential easing of economic sanctions. The Iranian delegation was led by Foreign Minister Abbas Araghchi, while the U.S. delegation included senior officials from the State Department. During the talks, the United States reportedly sought to address Iran's missile program and regional activities, whereas Iran emphasized that negotiations should remain limited to nuclear issues and sanctions relief. Both sides expressed commitment to continuing diplomatic engagement, though no immediate agreement was announced.

Economic relations

Trade between Iran and the United States reached $623 million in 2008. According to the United States Census Bureau, American exports to Iran reached $93 million in 2007 and $537 million in 2008. American imports from Iran decreased from $148 million in 2007 to $86 million in 2008. This data does not include trade conducted through third countries to circumvent the trade embargo. It has been reported that the United States Treasury Department has granted nearly 10,000 special licenses to American companies over the past decade to conduct business with Iran. US exports to Iran include[when?] cigarettes (US$73 million); corn (US$68 million); chemical wood pulp, soda or sulfate (US$64 million); soybeans (US$43 million); medical equipment (US$27 million); vitamins (US$18 million); and vegetable seeds (US$12 million). In May 2013, US President Barack Obama lifted the embargo on exports of communications equipment and software to non-government Iranians. In June 2013, the Obama administration expanded its sanctions against Iran, targeting its auto industry and, for the first time, its currency. According to a 2014 study by the National Iranian American Council, sanctions cost the US over $175 billion in lost trade and 279,000 lost job opportunities.
According to Business Monitor International: "The tentative rapprochement between Iran and the US, which began in the second half of 2013, has the potential to become a world-changing development, and unleash tremendous geopolitical and economic opportunities, if it is sustained. Tehran and Washington have been bitter enemies since 1979, when the Iranian Revolution overthrew the pro-American Shah and replaced him with a virulently anti-American Islamist regime. Since then, Iran has been at the vanguard of countries actively challenging the US-led world order. This has led to instability in the Middle East, and Iran's relative isolation in international affairs. Yet, if Iran and the US were to achieve a diplomatic breakthrough, geopolitical tensions in the Middle East could decline sharply, and Iran could come to be perceived as a promising emerging market in its own right." On April 22, 2019, under the Trump administration, the U.S. demanded that buyers of Iranian oil stop purchases or face economic penalties, announcing that the six-month sanction exemptions for China, India, Japan, South Korea and Turkey granted a year earlier would not be renewed and would end by May 1. The move was seen as an attempt to completely stifle Iran's oil exports. Iran insisted the sanctions were illegal and that it attached "no value or credibility" to the waivers. U.S. Secretary of State Mike Pompeo said President Trump's decision not to renew the waivers showed his administration was "dramatically accelerating our pressure campaign in a calibrated way that meets our national security objectives while maintaining well supplied global oil markets". On April 30, Iran stated it would continue to export oil despite U.S. pressure. On May 8, 2019, exactly one year after the Trump administration withdrew the United States from the Joint Comprehensive Plan of Action, the U.S. imposed a new layer of largely duplicative sanctions on Iran, targeting its metal exports, a sector that generates 10 percent of its export revenue. The move came amid escalating tension in the region and just hours after Iran threatened to start enriching more uranium if it did not get relief from U.S. measures that were crippling its economy. The Trump administration said the sanctions would be lifted only if Iran fundamentally changed its behavior and character. On June 24, 2019, following continued escalations in the Strait of Hormuz, President Donald Trump announced new targeted sanctions against Iranian and IRGC leadership, including Supreme Leader Ali Khamenei and his office. IRGC targets included Naval commander Alireza Tangsiri, Aerospace commander Amir Ali Hajizadeh, and Ground commander Mohammad Pakpour, among others. U.S. Treasury Secretary Steven Mnuchin said the sanctions would block "billions" in assets. The US Treasury Department's Financial Crimes Enforcement Network imposed a measure further barring Iranian banks from the US banking system and requiring US banks to step up due diligence on the accounts in their custody. On July 23, 2020, the US Department of the Treasury's Office of Foreign Assets Control (OFAC) blacklisted four Iranian metal-sector organizations along with their foreign subsidiaries. One German subsidiary and three in the United Arab Emirates – owned and controlled by Iran's biggest steel manufacturer, Mobarakeh Steel Company – were also blacklisted by Washington for generating millions of dollars for Iran's aluminum, steel, iron, and copper sectors.
The sanctions froze all US assets controlled by the companies in question and further prohibited Americans from associating with them. On October 8, 2020, the U.S. Treasury Department placed sanctions on 18 Iranian banks. Any American connection to these banks is to be blocked and reported to the Office of Foreign Assets Control, and, 45 days after the sanctions take effect, anyone transacting with these banks may "be subject to an enforcement action." Treasury Secretary Steven Mnuchin said the goal was to pressure Iran to end nuclear activities and terrorist funding. On October 30, 2020, it was revealed that the US had "seized Iranian missiles shipped to Yemen" and had "sold 1.1 million barrels of previously seized Iranian oil that was bound for Venezuela" in two shipments (the Liberia-flagged Euroforce and the Singapore-flagged Maersk Progress), and had sanctioned 11 new Iranian entities. In February 2026, Donald Trump issued an executive order imposing tariffs of up to 25 percent on nations engaged in trade with Iran. The policy focused on Iran's commercial partners rather than the country itself and aimed to reduce international transactions with Iran, even as diplomatic talks continued.

References

It is natural that our Islamic system should be viewed as an enemy and an intolerable rival by such an oppressive power as the United States, which is trying to establish a global dictatorship and further its own interests by dominating other nations and trampling on their rights. It is also clear that the conflict and confrontation between the two is something natural and unavoidable. [Address by Ali Khamenei, the Supreme Leader of Iran, to students at Shahid Beheshti University, May 12, 2003]

To give up this trump card—the non-relationship with the United States, the easy evocation of an external bogeyman—would be costly for the Iranian leadership. It would be a Gorbachevian signal that the revolution is entering a dramatically new phase—one Iran's leaders cannot be certain of surviving in power.
========================================