[SOURCE: https://en.wikipedia.org/wiki/KB_Toys] | [TOKENS: 3930]
KB Toys (stylized K·B Toys; Kay Bee Toys prior to 1981) was an American chain of mall-based retail toy stores. The company was founded in 1922 as Kaufman Brothers, a wholesale candy store. The company opened a wholesale toy store in 1946 and ended its candy wholesaling two years later to emphasize its toy products. Retail sales began during the 1970s, using the name Kay-Bee Toy & Hobby. In 1999, the company operated 1,324 stores across the United States and was the second-largest toy retailer in the U.S., but it began to struggle in the early 21st century, declaring bankruptcy in both 2004 and 2008 before going out of business on February 9, 2009. The company operated 461 stores at the time of its closure. International retailer Toys "R" Us acquired the remains of KB Toys, consisting mainly of its website, trademarks, and intellectual property rights. Strategic Marks, a company that buys and revives defunct brands, purchased the brand in 2016, and planned to open new stores using the name beginning in 2019; plans for this revival, however, were cancelled due to a lack of funding. History Brothers Harry and Joseph Kaufman originally opened a wholesale candy store, Kaufman Brothers, in Pittsfield, Massachusetts, on April 1, 1922. During the 1940s, the brothers acquired a wholesale toy company from a candy client who owed them money for outstanding debts. On September 21, 1946, Kaufman Brothers opened a wholesale toy store at 70 Columbus Avenue in Pittsfield, marking the company's entry into the wholesale toy industry. In 1948, Kaufman Brothers Inc. ended its involvement in the candy business to focus entirely on the toy business, which was thriving by that time. In 1973, the company ended its toy wholesaling to become a shopping mall-based toy retailer known as Kay-Bee Toy & Hobby, with "Kay-Bee" named after the initials in "Kaufman Brothers". The company had 26 stores at the time. In 1977, the company name changed to Kay-Bee Toy and Hobby Shops Inc. By 1979, the company was based in Lee, Massachusetts. The company opened 40 new stores during that year, and stated that it was the nation's fastest-growing toy store chain, with 170 locations across the Midwestern and Eastern United States. Timeline 1922: On April 1, 1922, brothers Joseph and Harry Kaufman open a wholesale candy and soda fountain supply business at 16 Gamwell Court, Pittsfield. (Berkshire Eagle, April 1, 1952.) 1940s: The business moves to 70 Columbus Avenue, Pittsfield. (Berk. Eagle, Sept. 21, 1946, at 9.) After 25 years in the wholesale candy business, the brothers enter the wholesale toy business in 1946, by accident, when a client gives them a wholesale toy company in lieu of payment of debts. Kaufman Brothers also negotiates a license with Louis Marx and Co. to sell a line of toy trains. It is a good time to diversify, because during World War II there was a shortage of candy ingredients, especially sugar. As wholesalers, they cover 400 stores within a 150-mile radius of the offices. 1948: The wholesale toy business is thriving, and the brothers decide to focus on toys exclusively, selling the wholesale candy business to Giftos Brothers. 1955: Joseph dies and Harry becomes President. Other family members working in the business are Harry's sons Richard and Donald, Joseph's son Howard, and son-in-law Harry (Buddy) Baker. 1960: The business runs its first retail toy store, in Winsted, CT, which was also given to the business in lieu of payment of debts. 
1961: The business opens its first toy store in New Hartford, NY. The business moves offices to 125 Pecks Road, Pittsfield. 1963: The business opens a retail toy store called Playworld in the Allendale Shopping Center, Pittsfield. 1968: The business opens its first mall store, in Eastfield Mall, Springfield, MA. 1970: The business moves headquarters to Route 102, Lee, MA. 1973: The business moves out of toy wholesaling to focus exclusively on the toy retailing business, renaming itself Kay-Bee Toy & Hobby Shops, Inc. By now, it has 26 stores. In February, Howard becomes President, a role he held until retiring in 1986. (Berk. Eagle Dec. 17, 2006.) 1976: The business now has 65 stores, in New England, New York, and New Jersey. 1977: The business changes its name to K-B Toys. 1981: The business is operating 250 stores, in over 40 states, with over $140 million in sales. On August 1, the business is purchased by Melville Corp., which renames the shops Kay-Bee Toy Stores. Richard and Donald Kaufman retire. 1981-86: Kay-Bee acquires 19 Kids Toy Stores, 52 Toy World Stores, 13 Toy Fair Stores, 2 Toy Kingdom Stores, 5 Macs Toys, 11 Rizzi Toys, and 330 Circus World stores. The business becomes the nation's largest retail toy chain. It employs almost 7,000 people and opens 330 more Kay-Bee and Toy Works stores, adding $260 million in sales, so that by the end of 1985 Kay-Bee is the largest mall-based toy retailer, with 585 stores doing over $400 million in sales. 1986: On March 31, Howard retires, and work begins on a new headquarters at 100 West St., Pittsfield, MA. Long-time buyer and executive Saul Rubenstein is named President. In 1981, the Melville Corporation purchased the company from the Kaufman family for $64.2 million (~$187 million in 2024). At the time, the company had 210 stores. Richard Kaufman, the son of Harry Kaufman, retired that year from his position as company president. Donald Kaufman, Richard's brother, also once served as a vice president for the company. In 1983, the bankrupt Wickes Companies, based in California, sold 37 of its 45 Toy World stores for $5.5 million to Kay-Bee Toy & Hobby, which took over the leases of the acquired stores. As of 1990, the company advertised itself as "The Toy Store in the Mall." That year, Melville Corporation purchased Circus World's 330 stores in 32 states for $95 million; the locations became part of the Kay-Bee division. In 1991, Kay-Bee Toys purchased K&K Toys' 136 stores, located in 18 states; the stores were converted to Kay-Bee stores the following year. During 1993 and 1994, as part of a major restructuring plan, Kay-Bee closed approximately 250 stores that had underperformed. The company became a direct competitor to Toys "R" Us in 1994, when it expanded its mall locations and began opening stores known as KB Toy Works, which operated in strip malls and sold current and closeout toys. KB Toy Works stores were larger than regular KB Toys stores, which averaged 3,500 sq ft (330 m2). Additionally, the company operated KB Toy Outlet stores, also known as KB Toy Liquidators; these stores were located in outlet malls and sold closeout toys. During holiday seasons, KB Toys operated temporary stores in malls known as KB Toy Express. In 1996, Kay-Bee had sales of $1.1 billion, and was sold that year to Consolidated Stores Corporation for $315 million. Company sales reached $1.6 billion in 1998, the same year that its merchandise website was launched. The store logo was also changed to "KB" that year. 
As of May 1999, KB Toys operated 1,324 stores. That month, Consolidated Stores announced a deal with BrainPlay.com (which provided toy sales information) to operate KBToys.com. Through the deal, Consolidated Stores would invest $80 million and would own 80% of the new website, while BrainPlay would own the remainder. The new website would be based at BrainPlay's headquarters in Denver, and BrainPlay's website would become KB Toys' new website, which would compete against Toys "R" Us' website and eToys.com. KB Toys' website was revamped and relaunched in July 1999 as KBKids.com. At the time, KB Toys was the second-largest toy retailer in the United States. To increase the online presence for KBKids, Consolidated Stores partnered with AOL, whose online service reached 17 million potential customers. Through the agreement, AOL would provide links to the KBKids website. In September 1999, Consolidated Stores announced plans to sell 20% of KBKids through shares in an upcoming public offering. In October 1999, KBKids.com launched a $43 million (~$75.9 million in 2024) advertising campaign, including television commercials, to promote the site ahead of the holiday shopping season. In January 2000, Consolidated Stores filed with the U.S. Securities and Exchange Commission to have KBKids listed on the NASDAQ as a separate and publicly traded company with the ticker symbol "KBKD". The initial public offering was valued at $210 million. Consolidated Stores was unable to earn considerable revenue from KB Toys, and experienced financial losses during 1999 and 2000, partially caused by spending on KBKids.com; another factor was decreased video game sales at KB Toys locations. In June 2000, Consolidated Stores withdrew its plans for KBKids to become a public company, and announced plans to sell KB Toys. In December 2000, Bain Capital purchased the company for $305 million, in partnership with KB Toys' management team. The investment group included 200 store managers led by Bain Capital and by KB Toys' chief executive officer Michael Glazer. Bain Capital contributed $18.1 million to the sale, while the remainder was financed by banks that lent the money to KB Toys. The KB Toys sale included its various divisions: KB Toy Works, KB Toy Outlet, KB Toy Liquidators, KB Toy Express, and KBKids.com. The sale ended KB Toys' two decades as a subsidiary, turning it into a private company. KB Toys began focusing more on video games, which accounted for 20% of the company's revenue as of 2001. Starting that year, KB Toys opened temporary "stores within a store" at select Sears department stores during the Christmas season. The stores were initially known as "KB Toys at Sears", and averaged 1,500 sq ft (140 m2). During 2001, KB Toys agreed to pay approximately $5.4 million (~$9.11 million in 2024) to acquire several inventory lots from the bankrupt eToys. In April 2002, through a dividend recapitalization, Bain Capital received an $85 million payment from KB Toys, which financed the payment through $66 million in bank loans. Glazer received $18 million, while $16 million was divided among other executives. KB Toys faced tough competition during the 2003 Christmas season, in addition to expensive store leases in malls with declining customer traffic. Approximately 950 of the company's 1,217 stores were located in malls. 
With $300 million (~$462 million in 2024) in debt, KB Toys filed for Chapter 11 bankruptcy protection in January 2004 and subsequently closed more than 600 stores, resulting in the layoffs of more than 3,400 of the company's 13,000 employees. Creditors stated that the 2002 dividend deal with Bain Capital had rendered KB Toys insolvent, resulting in a loss of $109 million leading up to the bankruptcy filing. Bain Capital stated that KB Toys was financially sound at the time of the dividend deal, and that the company's later financial problems were unrelated to the deal. In February 2005, KB Toys' creditors, including Hasbro and Lego, accused the company's top executives and majority shareholders of improperly providing themselves with multimillion-dollar payments prior to the bankruptcy. The creditors, referring to the April 2002 deal, alleged that the payments occurred during a decline in the economy and in KB Toys' business, and that the payments had a "devastating impact" on the company. During the same month, Big Lots (formerly Consolidated Stores) filed a lawsuit against Bain Capital, alleging it was owed $45 million (~$77.7 million in 2024) from the 2000 sale. Big Lots' lawsuit was dismissed in 2006. KB Toys exited Chapter 11 bankruptcy in August 2005, with 90% of its ownership under PKBT Holdings, an affiliate of Prentice Capital Management. Bain Capital had attempted to retain control of KB Toys, which was instead awarded to Prentice Capital by a bankruptcy judge. As part of the plan under which KB Toys emerged from bankruptcy, Prentice Capital invested $20 million into the company. Gregory R. Staley, a former president of Toys "R" Us' U.S. and international units, was named KB Toys' new chief executive officer. The company had 640 stores. In August 2007, the company announced a business strategy that included layoffs at its headquarters in Pittsfield, Massachusetts. That November, the company had 566 stores and began closing 122 of them. Because of poor sales at its mall-based locations, as well as competition, the company filed for Chapter 11 bankruptcy on December 11, 2008. The chain began going-out-of-business sales that month. At the time, the company had 10,850 employees, including approximately 6,500 seasonal workers. The company had 277 mall locations, 114 KB Toy Outlet stores, 40 KB Toy Works stores, and 30 KB Toys Holiday stores, for a total of 461. It was the largest mall-based toy retailer in the United States at the time, operating in 44 states, as well as Guam and Puerto Rico. It was also the second-oldest operating toy retailer in North America (behind FAO Schwarz) before its demise. The store-closing sales (as well as the termination of the company's website) were concluded on February 9, 2009. The KB Toys brand and related intangible assets were sold by Streambank LLC to Toys "R" Us on September 4, 2009, for a reported $2.1 million (~$2.98 million in 2024). Because KB Toys' stores had been closed and liquidated, the sale applied mainly to the company's logo, website, trademarks, and other intellectual properties. Toys "R" Us was initially unsure of how to integrate the KB name into its business plan. Toys "R" Us has used the KB Toys name and logo on self-manufactured toys sold under the name "KB Classics". Strategic Marks, LLC, a company that buys and revives defunct brands, registered a trademark for KB Toys in 2016, after Toys "R" Us allowed the previous registration to lapse. 
In March 2018, Strategic Marks founder Ellia Kassoff stated that due to Toys "R" Us going out of business in the United States, Strategic Marks planned to open 1,000 KB Toys pop-up stores across America for Black Friday (November 2018). After the holiday season, Kassoff would decide which stores would then become permanent. In early November 2018, Kassoff announced that the relaunch would be delayed until 2019, allowing the company to begin with "as few missteps as possible". Kassoff stated that the delay would "give us plenty of time to build out the most optimum supply chain, distribution and retail infrastructure our customers deserve." Prior to the delay, there had been plans to open 400 to 600 seasonal pop-up stores in 2018, and 600 to 800 permanent stores within three to four years. In March 2019, Kassoff cited a lack of funding as the reason that the pop-up stores did not open as planned. He stated, "The toy companies had lots of conflicts of interest that prevented them from investing in KB given that they sell to other retailers, and mall operators don't typically invest in prospective tenants. It is taking a while to get this done and build out a strategy. Once we get the money together we will be off and running." Strategic Marks sought an investment bank to finance the opening of 200 to 250 temporary KB Toys stores, which would determine whether permanent locations would be viable. Another possible reason the revival never fully materialized is that Strategic Marks failed to renew the trademark for KB Toys in 2020. As a result, the registration fell into status 606, meaning that all of the KB Toys trademarks are considered abandoned. Lawsuits In December 1999, The Equal Rights Center (TERC) and two black customers filed a federal lawsuit against KB Toys over one of the company's policies in which personal checks could not be used to pay for purchases at certain stores that experienced unusually high rates of returned checks. TERC alleged that KB Toys' policy was discriminatory against black people, stating that the policy was enforced at eight stores in predominantly black neighborhoods located in the Baltimore–Washington metropolitan area. KB Toys denied the allegation, and stated that racial demographics were not a consideration when enacting the policy 13 years earlier. The company further stated that checks from white people were also not accepted at the stores specified in the lawsuit. By March 2000, the lawsuit had been amended to include three additional black plaintiffs, and the suit sought damages as well as an end to the company's check-writing policy. In January 2001, a U.S. district judge removed TERC from the case as it was not affected by KB Toys' check-writing policy. The lawsuit was resolved in favor of KB Toys in 2003. In 2001, the district attorney for Napa County, California, filed a lawsuit alleging that KB Toys misrepresented sale prices and that it sold returned items as new. The case was settled in August 2003 for $1.2 million (~$1.96 million in 2024). In 2003, a class action lawsuit was filed in Chicago against KB Toys, alleging that the company's stores used deceptive price tags to manipulate consumers into believing that they were buying products at a discounted price. The lawsuit was settled with KB Toys providing a one-week 30 percent discount on purchases of $30 or more.
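The parenthetical "in 2024" dollar figures quoted throughout this article (for example, the $64.2 million Melville purchase price given as roughly $187 million in 2024) are inflation adjustments of nominal historical amounts. As a rough, non-authoritative illustration of how such a conversion is typically computed, the Python sketch below scales a nominal amount by the ratio of a price index in 2024 to the index in the original year; the function name and index values are hypothetical placeholders, not data from the article, and the article's own figures may rely on a different index or method.

    # Minimal sketch of a price-index inflation adjustment (assumption: the
    # article's 2024 equivalents were produced by some such index-ratio method).
    def to_2024_dollars(nominal: float, index_year: float, index_2024: float) -> float:
        """Scale a nominal dollar amount by the ratio of the 2024 price index
        to the index of the year in which the amount was originally quoted."""
        return nominal * index_2024 / index_year

    # Hypothetical usage with made-up index levels; depending on the index
    # chosen, the result will differ somewhat from the article's figures.
    print(f"~${to_2024_dollars(64.2, index_year=91.0, index_2024=314.0):.0f} million in 2024 dollars")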
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-62] | [TOKENS: 17273]
The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Nederland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917 helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. totally withdrawing in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The Fall of Communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered as the most attractive to the population are the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environmental-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024[update]. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Free Trade Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. had become a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. Total strength of the entire military is about 1.3 million active duty with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments greater than 100 active duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 U.S. police agencies from local to national level in the United States. Law in the United States is mainly enforced by local police departments and sheriff departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights, national security, enforcing U.S. federal courts' rulings and federal laws, and interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogenous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. treasuries market, and its linked eurodollar. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
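As a rough arithmetic check of the 2023 energy mix quoted above, the rounded shares can be tallied directly; because each figure is rounded to the nearest percent, the fossil-fuel total for the United States comes out near, but not exactly at, the cited 84%.

```python
# Rounded 2023 U.S. energy-mix shares quoted above (percent of total energy).
energy_mix = {
    "petroleum": 38,
    "natural gas": 36,
    "renewables": 9,
    "coal": 9,
    "nuclear": 9,
}

fossil_share = energy_mix["petroleum"] + energy_mix["natural gas"] + energy_mix["coal"]
print(f"fossil-fuel share of rounded figures: {fossil_share}%")   # 83%, vs. ~84% cited
print(f"sum of all rounded shares: {sum(energy_mix.values())}%")  # 101% due to rounding
```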
It also has the highest number of nuclear power reactors of any country. From 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. is among the top ten countries with the highest vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, it had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. De facto, English is the official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and it spends more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as a homogenizing melting pot, and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1791. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers, like Harriet Beecher Stowe, and authors of slave narratives, such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles emerged early enough that they could have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the world's most commercially successful and most widely seen films. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs study before embarking on successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/LEO_computer] | [TOKENS: 1904]
Contents LEO (computer) The LEO (Lyons Electronic Office) was a series of early computer systems created by J. Lyons and Co. The first in the series, the LEO I, was the first computer used for commercial business applications. The prototype LEO I was modelled closely on the Cambridge EDSAC. Its construction was overseen by Oliver Standingford, Raymond Thompson and David Caminer of J. Lyons and Co. LEO I ran its first business application in 1951. In 1954 Lyons formed LEO Computers Ltd to market LEO I and its successors LEO II and LEO III to other companies. LEO Computers eventually became part of English Electric LEO Computers (EEL), then English Electric Leo Marconi (EELM), then English Electric Computers (EEC), where the same team developed the faster LEO 360 and even faster LEO 326 models. It then passed to International Computers Limited (ICL) and ultimately Fujitsu. LEO series computers were still in use until 1981. Origins and initial design J. Lyons and Co. was one of the UK's leading catering and food manufacturing companies in the first half of the 20th century. In 1947, two of its senior managers, Oliver Standingford and Raymond Thompson, were sent to the United States to look at new business methods developed during World War II. During the visit, they met Herman Goldstine, one of the original developers of ENIAC, the first general-purpose electronic computer. Standingford and Thompson saw the potential of computers to help solve the problem of administering a major business enterprise. They also learned from Goldstine that, back in the UK, Douglas Hartree and Maurice Wilkes were building another such machine, the pioneering EDSAC computer, at the University of Cambridge. On their return to the UK, Standingford and Thompson visited Hartree and Wilkes in Cambridge and were favourably impressed with their technical expertise and vision. Hartree and Wilkes estimated that EDSAC was 12–18 months from completion, but said that this interval could be shortened by additional funding. Standingford and Thompson wrote a report to the Lyons board recommending that Lyons should acquire or build a computer to meet their business needs. The board agreed that, as a first step, Lyons would provide Hartree and Wilkes with £2,500 for the EDSAC project, and would also provide them with the services of a Lyons electrical engineer, Ernest Lenaerts. EDSAC was completed and ran its first program in May 1949. Following the successful completion of EDSAC, the Lyons board agreed to start the construction of their own machine, expanding on the EDSAC design. The LEO computer room, which took up around 2,500 square feet of floor space, was at Cadby Hall in Hammersmith. The Lyons machine was christened Lyons Electronic Office, or LEO. On the recommendation of Wilkes, Lyons recruited John Pinkerton, a radar engineer and research student at Cambridge, as team leader for the project. Lenaerts returned to Lyons to work on the project, and Wilkes provided training for Lyons' engineer Derek Hemy, who would be responsible for writing LEO's programs. On 15 February 1951 the computer, carrying out a simple test program, was shown to HRH Princess Elizabeth. The first business application to be run on LEO was Bakery Valuations, which computed the costs of ingredients used in bread and cakes. This was successfully run on 5 September 1951, and LEO took over Bakery Valuations calculations completely on 29–30 November 1951. 
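The Bakery Valuations job costed the ingredients going into Lyons' bakery output. Purely as an illustration of that kind of calculation—the ingredient names, quantities, and prices below are hypothetical, not taken from Lyons' records—the core of such a valuation is a simple cost roll-up:

```python
# Hypothetical unit prices (pence per pound) and a hypothetical recipe, used only
# to illustrate the kind of ingredient costing LEO I automated; not Lyons' data.
PRICES_PENCE_PER_LB = {"flour": 2.5, "sugar": 4.0, "butter": 12.0, "eggs": 18.0}

def batch_cost(recipe_lbs):
    """Total ingredient cost of one batch: sum of quantity times unit price."""
    return sum(qty * PRICES_PENCE_PER_LB[item] for item, qty in recipe_lbs.items())

cake_recipe = {"flour": 1.0, "sugar": 0.5, "butter": 0.5, "eggs": 0.25}  # pounds per batch
print(f"ingredient cost per batch: {batch_cost(cake_recipe):.2f}d")
```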
Mary Coombs was employed in 1952 as the first female programmer to work on LEO, and as such she is recognized as the first female commercial programmer. Five files of archive material on the LEO Computer patent are held at the British Library and can be accessed through the British Library Archives catalogue. Design LEO I's clock speed was 500 kHz, with most instructions taking about 1.5 ms to execute. To be useful for business applications, the computer had to be able to handle a number of data streams, input and output, simultaneously. Therefore, its chief designer, John Pinkerton, designed the machine to have multiple input/output buffers. In the first instance, these were linked to fast paper tape readers and punches, fast punched card readers and punches, and a 100-line-per-minute tabulator. Later, other devices, including magnetic tape, were added. Its ultrasonic delay-line memory, based on tanks of mercury, held 2K (2,048) 35-bit words (i.e., 8¾ kilobytes) and was four times as large as that of EDSAC. (A brief arithmetic check of these figures is sketched at the end of this article.) The systems analysis was carried out by David Caminer. Applications and successors Lyons used LEO I initially for valuation jobs, but its role was extended to include payroll, inventory, and so on. One of its early tasks was the elaboration of daily orders, which were phoned in every afternoon by the shops and used to calculate the overnight production requirements, assembly instructions, delivery schedules, invoices, costings, and management reports. This was the first instance of an integrated management information system. The LEO project was also a pioneer in outsourcing: in 1956, Lyons started doing the payroll calculations for Ford UK and others on the LEO I machine. The success of this led to the company dedicating one of its LEO II machines to bureau services. Later, the system was used for scientific computations as well. Met Office staff used a LEO I before the Met Office bought its own computer, a Ferranti Mercury, in 1959. In 1954, with the decision to proceed with LEO II and interest from other commercial companies, Lyons formed LEO Computers Ltd. The first LEO III was completed in 1961; it was a solid-state machine with a ferrite-core memory with a 13.2 μs cycle time. It was microprogrammed and was controlled by a multitasking "Master program" operating system, which allowed concurrent running of as many as 12 application programs. Users of LEO computers programmed in two coding languages: Intercode, a low-level assembler-type language; and CLEO (acronym: Clear Language for Expressing Orders), the COBOL equivalent. One of the features that LEO III shared with many computers of the day was a loudspeaker connected to the central processor via a divide-by-100 circuit and an amplifier, which enabled operators to tell whether a program was looping by the distinctive sound it made. Another quirk was that many intermittent faults were due to faulty connectors and could be temporarily fixed by briskly strumming the card handles.[citation needed] Some LEO III machines purchased in the mid-to-late 1960s remained in commercial use at GPO Telephones, the forerunner of British Telecom, until 1981, primarily producing telephone bills. They were kept running using parts from redundant LEOs purchased by the GPO.[citation needed] Fate and legacy In 1963, LEO Computers Ltd was merged into English Electric Company, and this led to the breaking up of the team that had inspired LEO computers. 
The company continued to build the LEO III, and went on to build the faster LEO 360 and even faster LEO 326 models, which had been designed by the LEO team before the takeover. English Electric LEO Computers (EEL) (1963), then English Electric Leo Marconi (EELM) (1964), later English Electric Computers (EEC) (1967), eventually merged with International Computers and Tabulators (ICT) and others to form International Computers Limited (ICL) in 1968. In the 1980s, there were still ICL 2900 mainframes running LEO programs, using an emulator written in ICL 2960 microcode at the Dalkeith development centre. At least one modern emulator has been developed which can run some original LEO III software on a modern server. ICL was bought by Fujitsu in 1990. Whether its investment in LEO actually benefited J. Lyons is unclear. Nick Pelling notes that before LEO I the company already had a proven, industry-leading system using clerks that gave it "near-real-time management information on more or less all aspects of its business", and that no jobs were lost when the system was computerized. In addition, LEO Computers lost money on many of its sales because of unrealistically low prices. In 2018, the Centre for Computing History, along with the LEO Computers Society, was awarded funding from the Heritage Lottery Fund for a project aiming to bring together, preserve, archive, and digitise a range of LEO Computers artefacts and documents. The Centre's museum gallery has an area dedicated to LEO, and as of 2021 it is also working on a LEO virtual reality project. In November 2021, to coincide with the 70th anniversary of the first successful full program run on LEO I, the project released a film about the history of LEO, which went on to win Video of the Year in the Association of British Science Writers Awards in July 2022.
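As noted in the Design section, here is a brief arithmetic check of the LEO I figures quoted there (a 500 kHz clock, roughly 1.5 ms per instruction, and 2,048 words of 35 bits). It simply reproduces the derived quantities; it is not a description of the machine's actual circuitry.

```python
# Derived quantities from the LEO I figures quoted in the Design section above.
CLOCK_HZ = 500_000        # 500 kHz clock
INSTR_TIME_S = 1.5e-3     # ~1.5 ms per typical instruction
WORDS = 2048              # delay-line store: 2K words
BITS_PER_WORD = 35

cycles_per_instruction = CLOCK_HZ * INSTR_TIME_S   # ~750 clock cycles per instruction
store_bits = WORDS * BITS_PER_WORD                 # 71,680 bits
store_kib = store_bits / 8 / 1024                  # 8.75, i.e. the "8¾ kilobytes" cited

print(f"~{cycles_per_instruction:.0f} cycles per instruction")
print(f"store size: {store_kib:.2f} kilobytes")
```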
========================================
[SOURCE: https://en.wikipedia.org/wiki/Tokimeki_Memorial] | [TOKENS: 437]
Contents Tokimeki Memorial Tokimeki Memorial (Japanese: ときめきメモリアル, Hepburn: Tokimeki Memoriaru; lit. "Heartbeat Memorial") is a dating simulation series by Konami. It consists of eight main games in addition to many spin-offs. The games are notable in the dating sim genre for being highly nonlinear. Among fans, the series is nicknamed TokiMemo. The gameplay in Tokimeki Memorial focuses on scheduling, dating, and stat-building. The player has limited time to allocate between asking out members of the opposite sex and developing the playable character's abilities at school and in sport; these abilities are built up chiefly to make the character more attractive to potential dates rather than for their own sake. Dates are frequent but very brief, with usually only one multiple-choice question to determine whether the partner's love meter will increase or decrease. One playthrough lasts for a fixed period of three years of high school (on the order of 5–10 hours of play), at the end of which the character with the highest love meter confesses their love. (A simplified, illustrative sketch of this loop appears at the end of this entry.) Game list Related media Several manga based on the series have been released: In 1999, a two-episode anime OVA based on the first game was produced by Studio Pierrot. In 2006, Tokimeki Memorial Online was adapted into a 25-episode anime television series, Tokimeki Memorial Only Love, produced by Konami Digital Entertainment Co., Ltd. and Anime International Company (AIC), which premiered across Japan on October 2, 2006. In 2009, an OVA adaptation of Tokimeki Memorial 4 called Hajimari no Finder, directed by Hanyuu Naoyasu, was produced by Asahi Production. Tokimeki Memorial is a live-action film loosely inspired by Konami's long-running dating simulator franchise of the same name. It was released on August 9, 1997, and was distributed by Toei. It is the final film produced for Fuji TV's "Our movie series". Spin-offs and merchandise Tokimeki Memorial spin-offs and merchandise include:
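Returning to the gameplay loop described above: the following is a deliberately simplified, purely illustrative sketch of the scheduling-and-affection mechanic. The stat names, characters, probabilities, and point values are invented for illustration and are not taken from any Tokimeki Memorial title.

```python
import random

WEEKS = 3 * 52  # a fixed three-year high-school playthrough

def play(seed=0):
    """Toy model of the weekly choice between stat-building and brief dates."""
    rng = random.Random(seed)
    stats = {"academics": 10, "sport": 10, "charm": 10}   # invented stats
    affection = {"classmate_a": 0, "classmate_b": 0}      # invented characters
    for _ in range(WEEKS):
        if rng.random() < 0.3:
            # A brief date: one multiple-choice answer moves the love meter.
            partner = rng.choice(sorted(affection))
            answered_well = rng.random() < min(stats["charm"], 90) / 100
            affection[partner] += 3 if answered_well else -2
        else:
            # Otherwise spend the week raising one stat.
            stats[rng.choice(sorted(stats))] += 1
    # At graduation, the character with the highest love meter "confesses".
    return max(affection, key=affection.get), stats

print(play())
```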
========================================
[SOURCE: https://en.wikipedia.org/wiki/Orion_(constellation)#cite_note-54] | [TOKENS: 4993]
Contents Orion (constellation) Orion is a prominent set of stars visible during winter in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century AD/CE astronomer Ptolemy. It is named after a hunter in Greek mythology. Orion is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Orion's two brightest stars, Rigel (β) and Betelgeuse (α), are both among the brightest stars in the night sky; both are supergiants and slightly variable. There are a further six stars brighter than magnitude 3.0, including three making the short straight line of the Orion's Belt asterism. Orion also hosts the radiant of the annual Orionids, the strongest meteor shower associated with Halley's Comet, and the Orion Nebula, one of the brightest nebulae in the sky. Characteristics Orion is bordered by Taurus to the northwest, Eridanus to the southwest, Lepus to the south, Monoceros to the east, and Gemini to the northeast. Covering 594 square degrees, Orion ranks 26th of the 88 constellations in size. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 26 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 04h 43.3m and 06h 25.5m , while the declination coordinates are between 22.87° and −10.97°. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "Ori". Orion is most visible in the evening sky from January to April, winter in the Northern Hemisphere, and summer in the Southern Hemisphere. In the tropics (less than about 8° from the equator), the constellation transits at the zenith. From May to July (summer in the Northern Hemisphere, winter in the Southern Hemisphere), Orion is in the daytime sky and thus invisible at most latitudes. However, for much of Antarctica in the Southern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus Orion, but only the brightest stars) are then visible at twilight for a few hours around local noon, just in the brightest section of the sky low in the North where the Sun is just below the horizon. At the same time of day at the South Pole itself (Amundsen–Scott South Pole Station), Rigel is only 8° above the horizon, and the Belt sweeps just along it. In the Southern Hemisphere's summer months, when Orion is normally visible in the night sky, the constellation is actually not visible in Antarctica because the Sun does not set at that time of year south of the Antarctic Circle. In countries close to the equator (e.g. Kenya, Indonesia, Colombia, Ecuador), Orion appears overhead in December around midnight and in the February evening sky. Navigational aid Orion is very useful as an aid to locating other stars. By extending the line of the Belt southeastward, Sirius (α CMa) can be found; northwestward, Aldebaran (α Tau). A line eastward across the two shoulders indicates the direction of Procyon (α CMi). A line from Rigel through Betelgeuse points to Castor and Pollux (α Gem and β Gem). Additionally, Rigel is part of the Winter Circle asterism. Sirius and Procyon, which may be located from Orion by following imaginary lines (see map), also are points in both the Winter Triangle and the Circle. Features Orion's seven brightest stars form a distinctive hourglass-shaped asterism, or pattern, in the night sky. 
Four stars—Rigel, Betelgeuse, Bellatrix, and Saiph—form a large roughly rectangular shape, at the center of which lie the three stars of Orion's Belt—Alnitak, Alnilam, and Mintaka. His head is marked by an additional eighth star called Meissa, which is fairly bright to the observer. Descending from the Belt is a smaller line of three stars, Orion's Sword (the middle of which is in fact not a star but the Orion Nebula), also known as the hunter's sword. Many of the stars are luminous hot blue supergiants, with the stars of the Belt and Sword forming the Orion OB1 association. Standing out by its red hue, Betelgeuse may nevertheless be a runaway member of the same group. Orion's Belt, or The Belt of Orion, is an asterism within the constellation. It consists of three bright stars: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). Alnitak is around 800 light-years away from Earth, 100,000 times more luminous than the Sun, and shines with a magnitude of 1.8; much of its radiation is in the ultraviolet range, which the human eye cannot see. Alnilam is approximately 2,000 light-years from Earth, shines with a magnitude of 1.70, and with an ultraviolet light that is 375,000 times more luminous than the Sun. Mintaka is 915 light-years away and shines with a magnitude of 2.21. It is 90,000 times more luminous than the Sun and is a double star: the two orbit each other every 5.73 days. In the Northern Hemisphere, Orion's Belt is best visible in the night sky during the month of January at around 9:00 pm, when it is approximately around the local meridian. Just southwest of Alnitak lies Sigma Orionis, a multiple star system composed of five stars that have a combined apparent magnitude of 3.7 and lying at a distance of 1150 light-years. Southwest of Mintaka lies the quadruple star Eta Orionis. Orion's Sword contains the Orion Nebula, the Messier 43 nebula, Sh 2-279 (also known as the Running Man Nebula), and the stars Theta Orionis, Iota Orionis, and 42 Orionis. Three stars comprise a small triangle that marks the head. The apex is marked by Meissa (Lambda Orionis), a hot blue giant of spectral type O8 III and apparent magnitude 3.54, which lies some 1100 light-years distant. Phi-1 and Phi-2 Orionis make up the base. Also nearby is the young star FU Orionis. Stretching north from Betelgeuse are the stars that make up Orion's club. Mu Orionis marks the elbow, Nu and Xi mark the handle of the club, and Chi1 and Chi2 mark the end of the club. Just east of Chi1 is the Mira-type variable red giant star U Orionis. West from Bellatrix lie six stars all designated Pi Orionis (π1 Ori, π2 Ori, π3 Ori, π4 Ori, π5 Ori, and π6 Ori) which make up Orion's shield. Around 20 October each year, the Orionid meteor shower (Orionids) reaches its peak. Coming from the border with the constellation Gemini, as many as 20 meteors per hour can be seen. The shower's parent body is Halley's Comet. Hanging from Orion's Belt is his sword, consisting of the multiple stars θ1 and θ2 Orionis, called the Trapezium and the Orion Nebula (M42). This is a spectacular object that can be clearly identified with the naked eye as something other than a star. Using binoculars, its clouds of nascent stars, luminous gas, and dust can be observed. The Trapezium cluster has many newborn stars, including several brown dwarfs, all of which are at an approximate distance of 1,500 light-years. 
Named for the four bright stars that form a trapezoid, it is largely illuminated by the brightest stars, which are only a few hundred thousand years old. Observations by the Chandra X-ray Observatory show both the extreme temperatures of the main stars—up to 60,000 kelvins—and the star forming regions still extant in the surrounding nebula. M78 (NGC 2068) is a nebula in Orion. With an overall magnitude of 8.0, it is significantly dimmer than the Great Orion Nebula that lies to its south; however, it is at approximately the same distance, at 1600 light-years from Earth. It can easily be mistaken for a comet in the eyepiece of a telescope. M78 is associated with the variable star V351 Orionis, whose magnitude changes are visible in very short periods of time. Another fairly bright nebula in Orion is NGC 1999, also close to the Great Orion Nebula. It has an integrated magnitude of 10.5 and is 1500 light-years from Earth. The variable star V380 Orionis is embedded in NGC 1999. Another famous nebula is IC 434, the Horsehead Nebula, near Alnitak (Zeta Orionis). It contains a dark dust cloud whose shape gives the nebula its name. NGC 2174 is an emission nebula located 6400 light-years from Earth. Besides these nebulae, surveying Orion with a small telescope will reveal a wealth of interesting deep-sky objects, including M43, M78, and multiple stars including Iota Orionis and Sigma Orionis. A larger telescope may reveal objects such as the Flame Nebula (NGC 2024), as well as fainter and tighter multiple stars and nebulae. Barnard's Loop can be seen on very dark nights or using long-exposure photography. All of these nebulae are part of the larger Orion molecular cloud complex, which is located approximately 1,500 light-years away and is hundreds of light-years across. Due to its proximity, it is one of the most intense regions of stellar formation visible from Earth. The Orion molecular cloud complex forms the eastern part of an even larger structure, the Orion–Eridanus Superbubble, which is visible in X-rays and in hydrogen emissions. History and mythology The distinctive pattern of Orion is recognized in numerous cultures around the world, and many myths are associated with it. Orion is used as a symbol in the modern world. In Siberia, the Chukchi people see Orion as a hunter; an arrow he has shot is represented by Aldebaran (Alpha Tauri), with the same figure as other Western depictions. In Greek mythology, Orion was a gigantic, supernaturally strong hunter, born to Euryale, a Gorgon, and Poseidon (Neptune), god of the sea. One myth recounts Gaia's rage at Orion, who dared to say that he would kill every animal on Earth. The angry goddess tried to dispatch Orion with a scorpion. This is given as the reason that the constellations of Scorpius and Orion are never in the sky at the same time. However, Ophiuchus, the Serpent Bearer, revived Orion with an antidote. This is said to be the reason that the constellation of Ophiuchus stands midway between the Scorpion and the Hunter in the sky. The constellation is mentioned in Horace's Odes (Ode 3.27.18), Homer's Odyssey (Book 5, line 283) and Iliad, and Virgil's Aeneid (Book 1, line 535). In old Hungarian tradition, Orion is known as "Archer" (Íjász), or "Reaper" (Kaszás). In recently rediscovered myths, he is called Nimrod (Hungarian: Nimród), the greatest hunter, father of the twins Hunor and Magor. The π and o stars (on upper right) form together the reflex bow or the lifted scythe. 
In other Hungarian traditions, Orion's Belt is known as "Judge's stick" (Bírópálca). In Ireland and Scotland, Orion was called An Bodach, a figure from Irish folklore whose name literally means "the one with a penis [bod]" and was the husband of the Cailleach (hag). In Scandinavian tradition, Orion's Belt was known as "Frigg's Distaff" (friggerock) or "Freyja's distaff". The Finns call Orion's Belt and the stars below it "Väinämöinen's scythe" (Väinämöisen viikate). Another name for the asterism of Alnilam, Alnitak, and Mintaka is "Väinämöinen's Belt" (Väinämöisen vyö), with the stars "hanging" from the Belt known as "Kaleva's sword" (Kalevanmiekka). There are claims in popular media that the Adorant from the Geißenklösterle cave, an ivory carving estimated to be 35,000 to 40,000 years old, is the first known depiction of the constellation. Scholars dismiss such interpretations, saying that perceived details such as a belt and sword derive from preexisting features in the grain structure of the ivory. The Babylonian star catalogues of the Late Bronze Age name Orion MULSIPA.ZI.AN.NA,[note 1] "The Heavenly Shepherd" or "True Shepherd of Anu" – Anu being the chief god of the heavenly realms. The Babylonian constellation is sacred to Papshukal and Ninshubur, both minor gods fulfilling the role of "messenger to the gods". Papshukal is closely associated with the figure of a walking bird on Babylonian boundary stones, and on the star map the figure of the Rooster is located below and behind the figure of the True Shepherd—both constellations represent the herald of the gods, in his bird and human forms respectively. In ancient Egypt, the stars of Orion were regarded as a god, called Sah. Because Orion rises before Sirius, the star whose heliacal rising was the basis for the Solar Egyptian calendar, Sah was closely linked with Sopdet, the goddess who personified Sirius. The god Sopdu is said to be the son of Sah and Sopdet. Sah is syncretized with Osiris, while Sopdet is syncretized with Osiris' mythological wife, Isis. In the Pyramid Texts, from the 24th and 23rd centuries BC, Sah is one of many gods whose form the dead pharaoh is said to take in the afterlife. The Armenians identified their legendary patriarch and founder Hayk with Orion. Hayk is also the name of the Orion constellation in the Armenian translation of the Bible. The Bible mentions Orion three times, naming it "Kesil" (כסיל, literally – fool): Job 9:9 ("He is the maker of the Bear and Orion"), Job 38:31 ("Can you loosen Orion's belt?"), and Amos 5:8 ("He who made the Pleiades and Orion"). This name is perhaps etymologically connected with "Kislev", the name for the ninth month of the Hebrew calendar (i.e. November–December), which, in turn, may derive from the Hebrew root K-S-L as in the words "kesel, kisla" (כֵּסֶל, כִּסְלָה, hope, positiveness), i.e. hope for winter rains. In ancient Aram, the constellation was known as Nephîlā′; the Nephilim are said to be Orion's descendants. In medieval Muslim astronomy, Orion was known as al-jabbar, "the giant". Orion's sixth brightest star, Saiph, is named from the Arabic, saif al-jabbar, meaning "sword of the giant". In China, Orion was one of the 28 lunar mansions (Sieu or Xiù, 宿). It is known as Shen (參), literally meaning "three", for the stars of Orion's Belt. 
The Chinese character 參 (pinyin shēn) originally meant the constellation Orion (Chinese: 參宿; pinyin: shēnxiù); its Shang dynasty version, over three millennia old, contains at the top a representation of the three stars of Orion's Belt atop a man's head (the bottom portion representing the sound of the word was added later). The Rigveda refers to the constellation as Mriga (the Deer). Nataraja, "the cosmic dancer", is often interpreted as the representation of Orion. Rudra, the Rigvedic form of Shiva, is the presiding deity of Ardra nakshatra (Betelgeuse) of Hindu astrology. The Jain Symbol carved in the Udayagiri and Khandagiri Caves, India in 1st century BCE has a striking resemblance with Orion. Bugis sailors identified the three stars in Orion's Belt as tanra tellué, meaning "sign of three". The Seri people of northwestern Mexico call the three stars in Orion's Belt Hapj (a name denoting a hunter) which consists of three stars: Hap (mule deer), Haamoja (pronghorn), and Mojet (bighorn sheep). Hap is in the middle and has been shot by the hunter; its blood has dripped onto Tiburón Island. The same three stars are known in Spain and most of Latin America as "Las tres Marías" (Spanish for "The Three Marys"). In Puerto Rico, the three stars are known as the "Los Tres Reyes Magos" (Spanish for The Three Wise Men). The Ojibwa/Chippewa Native Americans call this constellation Mesabi for Big Man. To the Lakota Native Americans, Tayamnicankhu (Orion's Belt) is the spine of a bison. The great rectangle of Orion is the bison's ribs; the Pleiades star cluster in nearby Taurus is the bison's head; and Sirius in Canis Major, known as Tayamnisinte, is its tail. Another Lakota myth mentions that the bottom half of Orion, the Constellation of the Hand, represented the arm of a chief that was ripped off by the Thunder People as a punishment from the gods for his selfishness. His daughter offered to marry the person who can retrieve his arm from the sky, so the young warrior Fallen Star (whose father was a star and whose mother was human) returned his arm and married his daughter, symbolizing harmony between the gods and humanity with the help of the younger generation. The index finger is represented by Rigel; the Orion Nebula is the thumb; the Belt of Orion is the wrist; and the star Beta Eridani is the pinky finger. The seven primary stars of Orion make up the Polynesian constellation Heiheionakeiki which represents a child's string figure similar to a cat's cradle. Several precolonial Filipinos referred to the belt region in particular as "balatik" (ballista) as it resembles a trap of the same name which fires arrows by itself and is usually used for catching pigs from the bush. Spanish colonization later led to some ethnic groups referring to Orion's Belt as "Tres Marias" or "Tatlong Maria." In Māori tradition, the star Rigel (known as Puanga or Puaka) is closely connected with the celebration of Matariki. The rising of Matariki (the Pleiades) and Rigel before sunrise in midwinter marks the start of the Māori year. In Javanese culture, the constellation is often called Lintang Waluku or Bintang Bajak, referring to the shape of a paddy field plow. The imagery of the Belt and Sword has found its way into popular Western culture, for example in the form of the shoulder insignia of the 27th Infantry Division of the United States Army during both World Wars, probably owing to a pun on the name of the division's first commander, Major General John F. O'Ryan. 
The film distribution company Orion Pictures used the constellation as its logo. In artistic renderings, the surrounding constellations are sometimes related to Orion: he is depicted standing next to the river Eridanus with his two hunting dogs Canis Major and Canis Minor, fighting Taurus. He is sometimes depicted hunting Lepus the hare. He is also sometimes depicted holding a lion's hide in his hand. There are alternative ways to visualise Orion. From the Southern Hemisphere, Orion is oriented south-upward, and the Belt and Sword are sometimes called the saucepan or pot in Australia and New Zealand. Orion's Belt is called Drie Konings (Three Kings) or the Drie Susters (Three Sisters) by Afrikaans speakers in South Africa and is referred to as les Trois Rois (the Three Kings) in Daudet's Lettres de Mon Moulin (1866). The appellation Driekoningen (the Three Kings) is also often found in 17th and 18th-century Dutch star charts and seaman's guides. The same three stars are known in Spain, Latin America, and the Philippines as "Las Tres Marías" (The Three Marys), and as "Los Tres Reyes Magos" (The Three Wise Men) in Puerto Rico. Even traditional depictions of Orion have varied greatly. Cicero drew Orion in a similar fashion to the modern depiction. The Hunter held an unidentified animal skin aloft in his right hand; his hand was represented by Omicron2 Orionis and the skin was represented by the five stars designated Pi Orionis. Saiph and Rigel represented his left and right knees, while Eta Orionis and Lambda Leporis were his left and right feet, respectively. As in the modern depiction, Mintaka, Alnilam, and Alnitak represented his Belt. His left shoulder was represented by Betelgeuse, and Mu Orionis made up his left arm. Meissa was his head, and Bellatrix his right shoulder. The depiction of Hyginus was similar to that of Cicero, though the two differed in a few important areas. Cicero's animal skin became Hyginus's shield (Omicron and Pi Orionis), and instead of an arm marked out by Mu Orionis, he holds a club (Chi Orionis). His right leg is represented by Theta Orionis and his left leg is represented by Lambda, Mu, and Epsilon Leporis. Further Western European and Arabic depictions have followed these two models. Future Orion is located on the celestial equator, but it will not always be so located due to the effects of precession of the Earth's axis. Orion lies well south of the ecliptic, and it only happens to lie on the celestial equator because the point on the ecliptic that corresponds to the June solstice is close to the border of Gemini and Taurus, to the north of Orion. Precession will eventually carry Orion further south, and by AD 14000, Orion will be far enough south that it will no longer be visible from the latitude of Great Britain. Further in the future, Orion's stars will gradually move away from the constellation due to proper motion. However, Orion's brightest stars all lie at a large distance from Earth on an astronomical scale—much farther away than Sirius, for example. Orion will still be recognizable long after most of the other constellations—composed of relatively nearby stars—have distorted into new configurations, with the exception of a few of its stars eventually exploding as supernovae, for example Betelgeuse, which is predicted to explode sometime in the next million years.
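As a rough numerical check on the visibility figures quoted under Characteristics above: the altitude at which a star crosses the local meridian is 90° minus the absolute difference between the observer's latitude and the star's declination. The sketch below is a minimal illustration of that relation, using the declination span quoted above; Rigel's declination of roughly −8° is an approximate catalogue value introduced here for illustration, not a figure from the text.

```python
# Minimal sketch: meridian (transit) altitude of Orion from different latitudes.
DEC_MIN, DEC_MAX = -10.97, 22.87   # declination span quoted under Characteristics (degrees)

def transit_altitude(latitude_deg, declination_deg):
    """Altitude above the horizon when the star crosses the meridian (refraction ignored)."""
    return 90.0 - abs(latitude_deg - declination_deg)

def best_transit_altitude(latitude_deg):
    """Highest transit altitude reached by any point within Orion's declination span."""
    closest_dec = min(max(latitude_deg, DEC_MIN), DEC_MAX)   # nearest declination in the span
    return transit_altitude(latitude_deg, closest_dec)

# Within roughly 8 degrees of the equator, part of the constellation passes overhead (90 deg).
for lat in (0.0, 8.0, 52.0, -34.0):
    print(f"latitude {lat:+6.1f} deg -> best transit altitude {best_transit_altitude(lat):5.1f} deg")

# Rigel's declination is about -8.2 deg (assumed approximate value, not from the text), so from
# the South Pole (latitude -90) it transits only ~8 deg above the horizon, matching the figure above.
print(f"Rigel from the South Pole: {transit_altitude(-90.0, -8.2):.1f} deg")
```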
========================================
[SOURCE: https://en.wikipedia.org/wiki/Tesla_Model_S] | [TOKENS: 9019]
Contents Tesla Model S The Tesla Model S is a battery-electric, four-door full-size car produced by the American automaker Tesla since 2012. The automaker's second vehicle, the Model S has been described as one of the most influential electric cars in the industry. Its various accolades include the Motor Trend Car of the Year Award in 2013. Tesla started developing the Model S around 2007 under the codename WhiteStar, with Henrik Fisker appointed as lead designer for the project. After a dispute with Elon Musk, Tesla's CEO, Fisker was replaced by Franz von Holzhausen who, by 2008, had designed the production Model S's exterior. Tesla unveiled a prototype of the vehicle in March 2009 in Hawthorne, California. In 2010, Tesla acquired a facility in Fremont, California, to produce the Model S, which was previously owned by General Motors and Toyota. Series manufacture of the car officially began at the Tesla Fremont Factory in June 2012. Tesla carried out the final assembly for European markets at its facilities in Tilburg, Netherlands, between 2013 and 2021. Constructed mostly of aluminum, the Model S shares 30 percent of its components with the Model X—a crossover SUV that was introduced in 2015. The Model S has undergone several updates during its production, the most prominent ones occurring in 2016 and 2021. These updates have usually included modifications to the motor, such as changes to power or torque, revised exterior elements, and refreshed interior features. One such change included the 2015 introduction of Tesla Autopilot—a partial vehicle automation advanced driver-assistance system. The 2021 update led to the introduction of the high-performance, three-motor Plaid—Tesla's most powerful model. Tesla announced the car's discontinuation in 2026. In 2015, the Model S was the world's best-selling plug-in electric vehicle. In 2012, it was included on Time's list of the Best Inventions of the Year, and the magazine later included it on its list of the 10 Best Gadgets of the 2010s in 2019. Car and Driver included it one its list of the best cars of the year in 2015 and 2016. In 2014, The Daily Telegraph described the Model S as a "car that changed the world". Road & Track argued that, with the introduction of the Plaid and features such as the yoke steering wheel, Tesla managed to turn the Model S into "perhaps one of the worst [cars in the world]". Development In January 2007, the American automaker Tesla Motors opened a facility in Rochester Hills, Michigan, employing sixty people to work on new projects, including a four-door sedan. Beginning development under the codename WhiteStar, Tesla planned for the car to have two powertrain options. The first would be a battery-electric version with an all-electric range of 200 miles (320 km). The second was to be a hybrid electric vehicle with a range extender, capable of traveling between 40 and 50 miles (64 and 80 km) on electric power before a small gasoline engine would recharge its batteries and power the vehicle, giving it a total range of 400 miles (640 km). However, at the GoingGreen conference in September 2008, Elon Musk—the chief executive officer of Tesla—announced that the company would exclusively produce battery-electric vehicles. In 2007, Musk appointed Henrik Fisker, known for his work with Aston Martin, as the lead designer of the WhiteStar project. Fisker signed a US$875,000 contract to design the car. 
The company requested that he design a "sleek, four-door sedan" priced from $50,000–$70,000 (equivalent to $77,636–$108,690 in 2024), and that it be ready between late 2009 and early 2010. Fisker owned a design studio in Orange County, California, which Tesla employees visited to view his designs. Their reactions were generally negative; Ron Lloyd, the vice president of the WhiteStar project, described the designs as "terrible [...] some of the early styles were like a giant egg". When Musk rejected his designs, Fisker attributed the decision to the project's physical constraints, saying, "they wouldn't let me make the car sexy". Shortly after the meetings, Fisker started his own eponymous company and debuted the Fisker Karma in 2008 at the North American International Auto Show. Musk filed a lawsuit against Fisker, accusing him of stealing Tesla's design ideas and using the $875,000 to launch his own company. Fisker won the lawsuit in November 2008, and an arbitrator declared Tesla's claims to be without merit and ordered Tesla to reimburse Fisker's legal fees. A small team of Tesla engineers went to a Mercedes-Benz car dealership where they test-drove a CLS and an E-Class. Both cars shared a chassis, and the engineers assessed different aspects of the two vehicles, evaluating their positives and negatives. They ultimately preferred the CLS's styling and used it as the baseline for the Model S. After purchasing a CLS, they disassembled it, modified the battery pack of a Tesla Roadster, cut out the CLS's floor, and integrated it with the battery pack. They subsequently put all of its electronics and systems in the CLS's trunk and replaced the interior. After three months of development, the engineers completed a battery-electric version of the CLS. They frequently tested the car on public roads. It had 120 miles (190 km) of all-electric range per charge and weighed more than the Roadster. In August 2008, Musk hired Franz von Holzhausen—who formerly designed for Mazda—as project WhiteStar's lead designer. Von Holzhausen reviewed Fisker's sketches and clay models but was unimpressed with what he saw, stating that "it was clear [...] that the people [who] had been working on this were novices". To save money, Tesla established its design center within a factory for SpaceX—a company also owned by Musk. As von Holzhausen began designing the exterior of the Model S, Tesla engineers initiated a project to construct another electric version of a CLS. They stripped it to its core, removed the body structure, and extended the wheelbase by four inches (10 cm) to align with early Model S specifications. Within three months, von Holzhausen had designed what would become the production Model S's exterior, and the engineers had begun building a prototype around the design. Given the battery pack's substantial weight, Musk and the team began efforts to minimize the weight of other components. To address this issue, Musk opted to use aluminum instead of steel, stating that the non-battery-pack portion of the vehicle must be lighter than equivalent gasoline vehicles. He noted that the primary challenge was that if aluminum were not used in its construction, the car's performance would be compromised. To accelerate the development of the Model S, one group of engineers worked during the day, while another arrived late evening and worked through the night, both operating within a 3,000 square feet (280 m2) tent in the SpaceX factory. 
Tesla debuted a prototype version of the Model S in Hawthorne, California, on March 26, 2009. In August 2009, J. B. Straubel stated that a battery pack with 300-mile (480 km) range would be available, a significant advance at the time. Tesla initially intended to manufacture the Model S in Albuquerque, New Mexico, and later in San Jose, California, but later withdrew from both plans mainly due to financial problems. During the Great Recession, American automaker General Motors decided to abandon the NUMMI facility in 2009, with Toyota soon following. A month after the last car was produced at the manufacturing line in April 2010, Toyota and Tesla announced a partnership and the transfer of the factory. Tesla agreed to purchase a significant portion of the facility for $42 million (equivalent $59 million in 2025), while Toyota invested $50 million (equivalent to $70 million in 2025) in Tesla for a 2.5 percent stake in the company. During the early 2010s, Musk expanded the engineering teams for the Model S, while von Holzhausen grew the design teams in Los Angeles. The engineers operated in a lab with forty-five personnel. The pre-production version of the Model S, featuring newly stamped body parts from the Fremont factory, a revamped battery pack, and improved power electronics, was completed in the basement of an office in Palo Alto, California. Twelve of the cars were produced; some were sent to suppliers such as Bosch, while others were preserved for testing and design alterations. On June 22, 2012, Tesla invited its employees, select customers, and the press to see the first production Model S in Fremont. Design The body and the chassis of the Model S are made mostly of aluminum. The car shares its platform and thirty percent of its parts with the Model X, a mid-size luxury crossover SUV that was introduced in 2015. The Model S is a full-size sedan with four doors and five seats; until 2018, it had an optional folding third row with rear-facing seats for two children with a five-point harness. The company claimed a drag coefficient of Cd=0.24, the lowest of any production car at release. This claim was independently verified by the magazine Car and Driver in the middle of 2014. The vehicle's drag coefficient was improved by a solid front fascia instead of a grille, retractable door handles, and a flat underbody with no exhaust pipes to disrupt the airflow. The Model S's battery pack is its heaviest component and is located inside of the car's floor. The battery pack consists of thousands of identical cylindrical 18650 battery cells, each measuring 18 millimeters (0.71 in) in diameter and 65 millimeters (2.6 in) in height. These cells feature a graphite/silicon anode, and a nickel-cobalt-aluminum cathode. The Model S has a center of gravity height of 18 inches (460 mm), reducing the risk of rollovers. Since the heavier components of the drivetrain are positioned behind the rear axle's centerline, the Model S has a weight distribution of 46 percent at the front and 54 percent at the rear. The Model S has a single-speed reduction gear transmission. Rear-wheel drive models use a single alternating current induction motor; all-wheel drive models before 2019 featured two. However, from 2019, the dual-motor models featured a rear induction motor and front permanent magnet synchronous reluctance motor.[note 2] The Plaid model, introduced in 2021, uses three permanent magnet synchronous reluctance motors. 
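To put a number on the "thousands of identical cylindrical 18650 battery cells" described above, the sketch below divides a nominal pack capacity by an assumed per-cell energy. The per-cell figures (about 3.2 Ah at a nominal 3.6 V) are typical published values for NCA-type 18650 cells rather than numbers from this article, so the result is an order-of-magnitude estimate, not Tesla's actual cell count.

```python
# Order-of-magnitude estimate of the cell count in a Model S-sized pack.
CELL_CAPACITY_AH = 3.2   # assumed amp-hours per 18650 cell (typical NCA value, not from the text)
CELL_VOLTAGE_V = 3.6     # assumed nominal cell voltage
cell_energy_wh = CELL_CAPACITY_AH * CELL_VOLTAGE_V   # roughly 11.5 Wh per cell

for pack_kwh in (60, 85, 100):   # pack sizes mentioned elsewhere in this article
    cells = pack_kwh * 1_000 / cell_energy_wh
    print(f"{pack_kwh} kWh pack -> roughly {cells:,.0f} cells")
```

In practice the cells are wired into series-parallel modules rather than counted loose, but the arithmetic makes the design point concrete: the pack is assembled from thousands of small commodity cells rather than a handful of large ones.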
A cast aluminum cross-member attached to the vehicle's body structure supports the front suspension and electrically assisted rack-and-pinion steering system. At the rear, a cast subframe is connected to the body using four rubber-isolated mounts to reduce vibrations. The front suspension features a double control arm design, while the rear suspension uses a multi-link arrangement, each with an air spring for improved ride comfort. This chassis also features disc brake components produced by Brembo. Since the Model S lacks a front engine, Tesla implemented a "frunk",[note 3] which has 5.3 cubic feet (150 L) of storage. The car's rear trunk possesses 26.6 cubic feet (750 L) of storage with the rear seats upright and 58.1 cubic feet (1,650 L) when the seats are folded down. Initially, the seats and steering wheel of the Model S were offered in both synthetic and non-synthetic leather options. In 2017, following a request from People for the Ethical Treatment of Animals to become the first cruelty-free automaker, Tesla switched exclusively to synthetic leather. Models and updates Tesla allocated its initial 1,000 Model S units to the "Signature" limited edition configurations. The AC induction motor of the base Signature model generates a power output of 270 kW (362 hp) and a torque output of 439 N⋅m (324 lb⋅ft). The Signature Performance's motor produces 310 kW (416 hp) and 601 N⋅m (443 lb⋅ft). Both models incorporate an 85 kilowatt-hour (kWh) lithium-ion battery, and have an all-electric range of 265 miles (426 km). Beginning in 2012, three battery pack configurations of the Model S were offered as 2013 model year[note 4] vehicles. Initially, a 40 kWh lithium-ion model was planned as the entry-level version, but Tesla announced in 2013 that this version would not be produced. The motor of this version was to produce a power output of 175 kW (235 hp) and a torque of 420 N⋅m (310 lb⋅ft). Instead, a more powerful model with a 60 kWh battery—with its output limited to 40 kWh via software—was introduced to substitute the 40 kWh model. Its motor generates 225 kW (302 hp) and 430 N⋅m (317 lb⋅ft), providing it with a range of 208 miles (335 km). Two versions of the 85 kWh model were created: one with specifications similar to the aforementioned Signature model, and a performance version, the "P85", with specifications akin to the Signature Performance. In 2014, Tesla discontinued the P85, replacing it with the P85D ("D" stands for "dual"). Tesla introduced a front motor in the P85D, in addition to the existing rear motor used in previous models. This configuration powers both the front and rear wheels, resulting in an all-wheel drive powertrain. The two motors produce a combined output of 515 kW (691 hp) and 931 N⋅m (687 lb⋅ft), giving it a range of 275 miles (443 km). Replacing the 60 kWh model, the 70D was introduced as a 2015 model year vehicle. It features dual motors that produce a combined output of 383 kW (514 hp) and 387 N⋅m (285 lb⋅ft), allowing it to have a range of 240 miles (390 km). A single-motor version of the 70 kWh model was also produced, with an output of 235 kW (315 hp) and 325 N⋅m (240 lb⋅ft), giving it a range of 210 miles (340 km). In 2015, Tesla launched the standard 90D and the performance P90D to succeed the 85 kWh model and the P85D, respectively. The 90D's motor produces 311 kW (417 hp) and 658 N⋅m (485 lb⋅ft), and a range of 288 miles (463 km). 
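The pack-capacity and rated-range pairs quoted in this section imply a rough energy consumption per mile, obtained by dividing nominal pack energy by rated range. The sketch below uses only figures stated in the text, with the 70D and 90D capacities inferred from their model names; because usable capacity is lower than the nominal rating and rated range differs from real-world driving, the results are indicative comparisons only.

```python
# Implied energy consumption: nominal pack energy divided by rated range.
variants = {
    "Signature / 85": (85, 265),   # kWh, rated miles, as quoted above
    "70D":            (70, 240),   # 70 kWh inferred from the model name
    "90D":            (90, 288),   # 90 kWh inferred from the model name
}

for name, (pack_kwh, range_miles) in variants.items():
    wh_per_mile = pack_kwh * 1_000 / range_miles
    print(f"{name:14s} ~{wh_per_mile:3.0f} Wh per mile")
```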
The P90D's dual motors generate a combined output of 568 kW (762 hp) and 967 N⋅m (713 lb⋅ft), and the car has a range of 268 miles (431 km). In April 2016, Tesla implemented a facelift for the Model S, releasing these cars for the 2017 model year. Its most prominent update lies in its front fascia, where the previous black grille has been replaced by a continuation of the body, leaving only a thin gap between the leading edge of the hood and the bumper, which houses the Tesla logo. The updated model also includes restyled, full-LED adaptive headlights that turn with the car to enhance visibility at night. That same year, Tesla reintroduced the 60 kWh model and introduced an all-wheel-drive version, the 60D. The former produces 235 kW (315 hp) of power and 325 N⋅m (240 lb⋅ft) of torque, giving it a range of 210 miles (340 km). The latter has dual motors that produce 242 kW (324 hp) and 430 N⋅m (317 lb⋅ft), with a range of 253 miles (407 km). Customers also had the option to upgrade the battery capacity to 75 kWh through an over-the-air update, extending the range by 40 miles (64 km). In March 2017, Tesla discontinued the 60 kWh model to distinguish its premium cars from the cheaper options, making the 75 kWh model the new entry-level offering. In late 2016, Tesla introduced the P100D as a replacement for the P90D. The P100D's motors generate a combined output 510 kW (680 hp) and 1,072 N⋅m (791 lb⋅ft), allowing it to have a range of 315 miles (507 km). In early 2017, Tesla introduced the 100D. Its dual motors deliver 360 kW (483 hp) and 660 N⋅m (487 lb⋅ft), and it has a range of 335 miles (539 km). Midway through 2017, Tesla discontinued the 90D. Tesla subsequently ended production of the rear-wheel-drive 75 kWh model in late 2017. In 2019, Tesla replaced the 75D, 100D, and P100D variants as part of the company's shift towards a revamped model range. In favor of a more streamlined lineup, in the middle of 2019, the previous 75D, 100D, and P100D models were replaced with the Standard Range, Long Range, and Performance models, respectively; the foremost model was discontinued later that year. The Performance and Long Range variants feature a permanent magnet synchronous reluctance motor—initially used in the Model 3—as the front motor, while the rear motor remains an AC induction unit. The Model S Long Range, equipped with a 100 kWh battery, has dual motors that generate a total output of 350 kW (469 hp) and 730 N⋅m (540 lb⋅ft), giving the Long Range a range of 375 miles (604 km). The Performance model's two motors produce a combined output of 562 kW (754 hp) and 931 N⋅m (687 lb⋅ft); it also has a 100 kWh battery and a range of 365 miles (587 km). For 2020, the Long Range model was replaced with the Long Range Plus. Its dual motors deliver a combined output of 311 kW (417 hp) and 658 N⋅m (485 lb⋅ft). It has a range of 400 miles (640 km). In 2021, Tesla launched a significant update to the Model S, known internally as the "Palladium" project, which involved an overhaul of most of its components and spawned the high-performance Plaid. The revised Model S was revealed in January 2021. At its debut, the updated Model S had the lowest drag coefficient of any current production automobile, with a value of Cd=0.208. The updated Long Range delivers 500 kW (670 hp) and achieves a range of 405 miles (652 km). The Plaid, which features a 95 kWh battery, has—in contrast to all other models—three permanent magnet synchronous motors, as well as an all-wheel drive layout. 
The trio produce a total output of 760 kW (1,020 hp) and 1,050 N⋅m (770 lb⋅ft), providing the car with a 0 to 60 mph (97 km/h) acceleration of 1.98 seconds and a maximum speed of 200 mph (322 km/h), with a range of 390 miles (630 km). In 2023, Tesla reintroduced the Standard Range model, which has a range of 370 miles (600 km). Musk announced that 2026 would be the final model year for both the Model S and Model X. For this model year, Tesla debuted an updated Model S and Model X in June 2025, which the company announced that February. A front bumper camera and changed chassis was implemented in both vehicles, and the range of the Long Range model was increased to 410 miles (660 km). The Model S Plaid received an updated rear diffuser and slightly altered front fascia, which the company stated was to optimize "high-speed stability". Technology The instrument panel is positioned directly before the driver and features a 12.3-inch (310 mm) liquid crystal display electronic instrument cluster. Initially, the infotainment control touchscreen featured a 17-inch (430 mm) multi-touch display divided into four sections. The top section shows status icons and offers quick access to features like charging, HomeLink, Driver Profiles, vehicle information, and Bluetooth. Below that, the second section provides access to various apps, such as Media, Navigation, Energy, Web, Camera, and Phone. The central viewing area displays two active apps, split into upper and lower areas, with most apps expandable to fill the entire screen. The bottom section contains controls and settings for the vehicle, including doors, locks, lights, temperature settings, and a secondary volume control. Originally, the Model S's touchscreen was powered by a Nvidia Tegra 3 3D Visual Computing Module (VCM), with a separate Nvidia Tegra 2 VCM handling the instrument cluster. Around 2018, Tesla upgraded these two Tegra System-on-a-Chip (SoC) units to a single Intel Atom–based SoC, which powered both the main touchscreen display and the instrument cluster. With the Palladium refresh, Tesla further updated the system, switching to a horizontal touchscreen orientation and an AMD Ryzen-based SoC. The touchscreen includes features like driver-side climate control, My App, the app launcher, recent apps, passenger-side climate control, and volume control.[note 5] Features, such as lock and unlock, trunk, glove box, and mirrors, could be controlled from the touchscreen. Also for the 2021 refresh, Tesla implemented a "yoke" steering wheel.[note 6] In 2014, Tesla introduced Autopilot, an advanced driver-assistance system developed by the automaker that amounts to partial vehicle automation. Every Model S produced from September 2014 onward included the Autopilot hardware, and it was officially released in October 2015 as a software update. Autopilot uses cameras, radar and ultrasound to detect road signs, lane markings, obstacles, pedestrians, cyclists, motorcyclists, traffic lights, and other vehicles. It also includes adaptive cruise control, lane centering, auto lane changing, auto parking and other semi-autonomous driving and parking capabilities. The Model S's operating systems are partly built using open-source software (OSS), which is publicly available. Tesla uses OSS like Linux, the GNU toolchain, Buildroot, and community projects like Ubuntu. From 2021, Tesla began using a system known as "Tesla Vision", which relies solely on cameras, replacing the previous radar-based sensors. 
In 2023, Tesla discontinued the ultrasonic system as part of its shift towards Tesla Vision. The Autopilot system has been the subject of criticism. Following a crash in Florida, the National Transportation Safety Board found that the driver's usage of the system "indicated an over-reliance on the automation and a lack of understanding of the system limitations". Tesla has faced accusations of misleading advertising, with critics alleging that the company led consumers to believe the vehicles were fully autonomous. Tesla has defended itself by arguing that the state's prolonged lack of objection to the Autopilot branding implied approval of its advertising practices.[note 7] In a 2019 survey by Bloomberg News, hundreds of Tesla owners reported experiencing dangerous behaviors with Autopilot, including phantom braking, lane departures, and failure to stop for road hazards. Users also noted issues like sudden software crashes, unexpected shutdowns, collisions with off-ramp barriers, radar failures, abrupt swerving, tailgating, and inconsistent speed changes. Tesla has devised numerous ways to charge the Model S: a 240-volt home wall connector, which provides up to 44 miles (71 km) of range per hour of charging; and a mobile connector, intended for use away from home, which offers up to 30 miles (48 km) of range per hour. Models prior to 2016 could be configured with two onboard chargers, which provide up to 62 miles (100 km) of range per hour. Tesla partnered with businesses to install Tesla Wall Connectors to provide a public charging network called Tesla Destination. The units are provided to the businesses by Tesla for free or at a cheap price. The business is responsible for the cost of electricity. Some businesses limit them to customers, employees, or residents only. In late 2012, Tesla began operating a network of 480-volt charging stations, dubbed Superchargers. Tesla initially planned for the Model S to allow fast battery swapping. In 2013, the company demonstrated a battery-swap operation that took about ninety seconds—roughly half the time needed to refill a gas tank. While Tesla initially planned to make battery swapping widely available, they reportedly abandoned the idea due to a perceived lack of customer interest. Jeremy Michalek, a mechanical engineering professor, suggested that the high cost, bulkiness, and resource demands of batteries made the creation of extensive networks of swappable packs—requiring storage, charging, and maintenance—economically and environmentally impractical. Critics have accused Tesla of exploiting California's zero-emission vehicle credit system by introducing the battery-swap program without ever making it accessible to the public. In 2020, Tesla announced plans to integrate the batteries into the vehicle's body to enhance strength and reduce weight and cost. Environmental impact A 2015 study by the Union of Concerned Scientists (UCS) concluded that in U.S. regions where the Model S is popular, its 68 percent higher manufacturing emissions are offset within a few years of average driving. However, the UCS report assumes that electric materials are recycled at rates similar to other cars and excludes the issue of battery disposal due to limited data on recycling practices and future intentions at the time. Over their lifecycle, electric vehicles—like the Model S—emit about half as much CO2 as comparable fossil fuel cars. 
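The UCS finding quoted above (manufacturing emissions roughly 68 percent higher than a comparable gasoline car, offset within a few years of average driving, with lifecycle per-mile emissions about half as high) can be turned into a back-of-the-envelope payback estimate. In the sketch below, only the 1.68x manufacturing ratio and the one-half per-mile ratio come from the text; the absolute figures for the gasoline car's manufacturing footprint, per-mile emissions, and annual mileage are illustrative assumptions, not numbers from the study.

```python
# Back-of-the-envelope payback time for the EV's extra manufacturing emissions.
# Illustrative assumptions (NOT taken from the UCS study):
ICE_MANUFACTURING_T = 7.0     # tonnes CO2 to build a comparable gasoline car
ICE_PER_MILE_KG     = 0.40    # kg CO2 per mile for the gasoline car
ANNUAL_MILES        = 12_000  # "average driving"

# Ratios taken from the text above:
ev_manufacturing_t = 1.68 * ICE_MANUFACTURING_T   # ~68 % higher manufacturing emissions
ev_per_mile_kg     = 0.5 * ICE_PER_MILE_KG        # ~half the per-mile emissions

extra_manufacturing_kg = (ev_manufacturing_t - ICE_MANUFACTURING_T) * 1_000
saving_per_mile_kg     = ICE_PER_MILE_KG - ev_per_mile_kg
payback_years          = extra_manufacturing_kg / (saving_per_mile_kg * ANNUAL_MILES)
print(f"Extra manufacturing CO2 repaid after about {payback_years:.1f} years of driving")
```

With these placeholder inputs the manufacturing premium is repaid in roughly two years, which is at least consistent with the "within a few years" conclusion quoted above.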
The lithium-ion batteries within the Model S contain nickel and small amounts of cobalt, which have a high environmental impact due to resource depletion, ecological toxicity, and extraction processes. In 2021, Tesla wrote in its impact report that it recycles all returned battery packs. It stated that Gigafactory Nevada can recycle up to 92 percent of the elements from old batteries, creating a "closed loop" system where old batteries are turned into new ones. In 2020, the company recycled significant amounts of metals: 1,300 tons of nickel, 400 tons of copper, and 80 tons of cobalt. Tesla's report states that most of its batteries are recycled in some form; according to Vice, it does not specify that 92 percent of each individual battery is fully recycled. The company has articulated an ultimate goal of achieving "high recovery rates, low costs, and low environmental impact" through its recycling program, though it does not provide details on its progress toward this. A 2021 scientific study showed that recycling the Tesla Model S battery pack is profitable due to its low disassembly costs and high revenues from cobalt recovery. The materials scientist Dana Thompson from the University of Leicester cautions that the recycling of batteries may pose significant hazards. According to Thompson, if a Tesla cell is punctured too deeply or at an inappropriate location, it risks short-circuiting, potentially leading to combustion and the release of toxic fumes. Production and initial deliveries The Model S is the company's second vehicle and, as of 2025[update], its longest-produced model. It has been produced at the 5,400,000 square feet (500,000 m2) Fremont, California, facility since June 2012. Tesla initially projected it would produce 1,000 units per month, aiming for a total of 5,000 units by the end of 2012. For 2013, Tesla aimed to quadruple that. Tesla built its 1,000th Model S by October 31, 2012, and delivered 2,650 units by the end of the year. In the first half of the subsequent year, 10,050 units were delivered to customers. From August 2013, for European countries, final assembly was carried out at Tesla's facilities in Tilburg, Netherlands. The aim of the Tilburg factory was to shorten delivery times for customers in Britain and the EU, improve product quality, and establish the automaker's presence in Europe by producing the Model S and the Model X. The assembly of both the Model S and Model X at the Tilburg facility ceased in early 2021. According to the Dutch newspaper NU.nl, the 2021 refresh introduced changes to the production process that made it impossible to complete final assembly at the Tilburg location. The Model S was the first vehicle by Tesla produced at the Fremont facility. It was followed by the Model X in 2015, the Model 3 in 2017 and the Model Y in 2020. These cars form the "S3XY" acronym. In 2015, the Model S was the world's best-selling plug-in electric vehicle, with Tesla selling 50,366 in that year. It was the second-best-selling electric car in the first half of 2016 after the Nissan Leaf. Since its inception, the Model S has been equipped with batteries supplied primarily by the electronics company Panasonic in Japan. Since January 2017, the car's batteries have also been produced at Gigafactory Nevada. European retail deliveries began between August and September 2013, with Norway, Switzerland, the Netherlands, Belgium, France, and Germany. The first Australian delivery took place in Sydney on December 9, 2014. 
Deliveries to the mainland Chinese market began on April 22, 2014, followed by Hong Kong in July 2014. Deliveries to the United Kingdom began in June 2014. Musk announced that production of the Model S and Model X would cease by mid-2026, following an 11 percent year-over-year decline in Tesla's automotive revenue in 2025. He also stated that the Fremont facility, where the vehicles are manufactured, will be repurposed for production of Tesla's forthcoming Optimus robot. Safety In European New Car Assessment Programme testing conducted in 2022, the Model S received a five-star rating. In National Highway Traffic Safety Administration (NHTSA) testing conducted in 2015, the Model S also received a five-star rating. Tesla subsequently claimed that—based on the details of the test—it actually achieved 5.4 stars, prompting the NHTSA to release a statement reaffirming that it does not award more than five stars, and that Tesla was "misleading the public" by claiming in their marketing that the NHTSA had awarded them a higher rating. On June 14, 2013, Tesla recalled Model S vehicles manufactured between May 10 and June 8, 2013, due to improper methods for aligning the left hand seat back striker to the bracket, which could weaken the weld between the bracket and frame. Musk stated that the weld had not detached on any car, there had been no complaints, and no injuries had occurred. In early January 2014, Tesla issued a recall for Model S vehicles from 2013 due to the risk of overheating with the adapter, cord, or wall outlet during charging. Following the recall, Jérôme Guillen, Tesla's vice president of sales, announced that nearly all Model S adapters had already been updated via over-the-air software to address the charging problem. Tesla noted that the recall impacted nearly all Model S vehicles and adapters produced in 2013. Tesla announced a voluntary recall on November 20, 2015, of all of its 90,000 Model S vehicles, to check for a possible defect in the cars' front seat belt assemblies. The problem was raised by one customer in Europe. Tesla's resulting investigation was unable to identify a root cause for the failure, and the company decided to examine every car. Tesla reported that no accidents or injuries were related to the problem. On January 20, 2017, Tesla recalled every Model S manufactured from 2012 because of defective Takata airbags. This recall not only impacted the Model S but also affected about 652,000 other vehicles from other automakers across the United States, which, at the time, was the largest automotive recall in the country's history. On April 20, 2017, Tesla issued a worldwide recall of 53,000 of the 76,000 Model S and Model X vehicles sold in 2016 due to faulty parking brakes. Tesla assured that this issue was unlikely to cause safety problems and had not resulted in any accidents or injuries. Despite this, the company asked customers to have their cars inspected, a process that took about forty-five minutes. About five percent of the vehicles were affected, and Brembo, the supplier of the defective part, would cover the repair costs. All 123,000 Model S cars manufactured before April 2016 were recalled on March 30, 2018, due to excessive corrosion of the bolts which secure the power steering, particularly those cars used in cold countries where roads are salted. Tesla's stock dropped nearly four percent in after-hours trading following the announcement of the Model S recall. 
In December 2021, 119,009 Model S vehicles produced between 2017 and 2020 were recalled because of the possibility of latch failure allowing front hoods to open unexpectedly. The recall, according to the company, affected around 14 percent of all Model S vehicles. In February 2024, Tesla recalled over two million Tesla vehicles in the United States due to the compact size of the warning lights on the instrument panel. Documents indicated that the recall was issued to enhance warnings and alerts for drivers. The NHTSA reported that the font size of the brake, park, and antilock brake warning lights was smaller than mandated by federal safety standards. This size made information difficult to read, thereby increasing the risk of a collision. The Model S was part of a major recall in July 2024 affecting approximately 1.8 million vehicles. The recall addressed a software issue that could prevent the detection of an unlatched hood, posing a risk of it unexpectedly opening while the vehicle was in motion. According to NHTSA documentation, this malfunction could obstruct the driver's vision, increasing the likelihood of a crash. A fire involving a Model S occurred on October 1, 2013, after the vehicle struck metal debris on Washington State Route 167 in Kent, Washington. The driver was alerted by the onboard system and was able to safely exit the highway, stop the car, and leave the vehicle without injury. Tesla later explained that the fire was triggered by a "direct impact of a large metallic object" to one of the car's 16 battery modules. The vehicle's design, which included firewalls separating the modules, limited the fire to a small section at the front of the car. The debris that caused the fire was identified as a "curved section" that had fallen off a truck and was recovered nearby. According to Tesla, the debris pierced a 3-inch (80 mm) hole through the vehicle's 0.25 in (6 mm) armor plate, with an estimated force of 25 short tons (23 t). Vents directed the flames away from the passenger compartment, preventing them from entering the cabin. On October 24, 2013, the NHTSA announced that it had not found evidence suggesting the fire resulted from a vehicle safety defect or noncompliance with federal safety standards. However, in the following month, the NHTSA initiated a preliminary evaluation to assess the potential risks associated with undercarriage strikes on 2013 Tesla Model S vehicles. On March 28, 2014, the investigation was closed, with the NHTSA stating that "Tesla's revision of vehicle ride height and addition of increased underbody protection should reduce both the frequency of underbody strikes and the resultant fire risk". On November 6, 2013, another fire occurred when a Tesla Model S struck a tow hitch on the road, causing damage to the underside of the vehicle. In response to these incidents, Tesla extended its vehicle warranty to cover fire damage and issued a software update to increase the car's ground clearance at highway speeds. In early February 2014, another fire incident was reported in Toronto, Canada. The Model S was parked in a garage and was not charging at the time. The cause of the fire remains undetermined. Tesla stated, "in this particular case, we don't yet know the precise cause, but have definitively determined that it did not originate in the battery, the charging system, the adapter or the electrical receptacle, as these components were untouched by the fire". On January 1, 2016, a 2014 Model S caught fire in Norway while supercharging unsupervised. 
The vehicle was destroyed but nobody was injured. An investigation by the Norwegian Accident Investigation Board concluded that the fire started within the car, but the exact cause could not be determined. In March 2016, Tesla announced that its own investigation found that the fire was caused by a short circuit in the vehicle's distribution box, although the extent of the damage made it impossible to determine what triggered the short circuit. A three-month-old Model S caught fire three times in December 2018, requiring firefighters to spend nearly ten hours preventing reignition. In July 2021, a Model S Plaid caught fire, and its electronic door system failed, forcing the driver to "use force to push it open". The vehicle then moved approximately 35 to 40 feet (11 to 12 m) before erupting into a "fireball". Reception and legacy The Model S has been referred to by several critics as one of the most influential and important electric cars. In a 2014 review for the newspaper The Sunday Times, Nick Rufford remarked, "the Model S represents the last throw of the electric dice [...] if this vehicle can't persuade people to ditch petrol and switch to battery power, no car can". In 2014, The Daily Telegraph included the Model S on its list of "cars that changed the world" and called it the most important car of the last 20 years. The BBC-owned magazine Top Gear described it as "one of the most appealing electric vehicles in the world [...] and one that almost single-handedly forced mainstream manufacturers to embrace electricity". Keith Barry of Consumer Reports mentioned that the introduction of specific features, such as a yoke-style steering wheel, has "distracted from the flagship sedan's underlying brilliance, as has Musk's public image".[note 9] Consumer Reports additionally pointed out that the success of the Model S prompted other automakers to rethink how they design and market their vehicles. The magazine Car and Driver noted that the Model S was the "first long-range, widely desired electric vehicle" when it was released, adding that "mainstream automakers [...] [struggled] to catch up". The Model S has received mixed reviews from automotive critics. Samuel Gibbs from the newspaper The Guardian referred to it as a "swish saloon car", writing that, unlike many other electric vehicles, it did not resemble "a bug or bubble-car". Gibbs was also impressed by its acceleration, remarking that it has "enough power to beat even the Aston Martin Rapide, all without petrol and with no emissions". Reviewing for The Independent, Lee Williams called the Model S "a beautiful car that symbolizes humanity's march towards automation", but criticized its large size, describing the car as "too damn big". Road & Track's Chris Perkins argued that Tesla managed to turn the "most important car of the century into a bad joke", describing the Model S Plaid as "perhaps one of the worst [cars in the world]". He called its yoke steering wheel "incredibly stupid", described its damping as "irritating", and stated that "it doesn't have the chassis, steering, or brakes to deal with the horsepower". The U.S. News & World Report thought that its "basic interior feels out of step with its price, and newer rivals offer more room, style and, in some cases, range". Lee Hutchinson, the senior technology writer for Ars Technica, opined that its "almond-shaped headlights and prominent nosecone conjure images of Maserati, while the rear half has a distinct Aston Martin DBS flavor, [and] the taillights and rear evoke the Jaguar XF". 
Although the two cars are in completely different classes, critics frequently compare the Model S to the first generation of the Nissan Leaf, a hatchback. Hutchinson, in another review, thought of its acceleration as "instant, ludicrous, [and] neck-snapping", believing that it was "more appropriate for a roller-coaster than a car". He described its styling as "graceful, with a precisely engineered exterior". Mat Watson, prominent for his Carwow reviews, praised the Model S Plaid as "astonishingly quick" and "extremely quiet", but he criticized its high price and noted that competing models offer greater comfort. Watson ultimately rated it eight out of ten. Writing for Car, Keith Adams described the Model S as "the king of the hill". He called the thrust "stomach-churning from rest", believing that the driver would "crave to relive the experience—again and again". Jalopnik's Lawrence Hodge criticized the yoke steering wheel, describing it as "stupid" and suggesting that its introduction was more of a downgrade than an upgrade. In 2012, Time magazine named the Model S one of the best inventions of the year. It was later featured in the magazine's list of the 10 best gadgets of the 2010s. Car and Driver included the Model S 60 on its list of the 10 best cars of the year in 2015 and entered the 70 and 70D models on its 2016 list. Some companies have developed modified cars based on the Model S with different body styles. In February 2019, a one-off version of the Model S with a shooting brake body style, named the Model SB, was announced by Niels van Roij Design. While an initial production run of twenty was considered, only a single unit was built. The car was finished in British racing green, mirrored by the glove compartment lining, a color choice inspired by the green found in the logo of Elipo, the company that assisted in the car's design, as well as foliage in Elipo owner Floris de Raadt's garden. It made its public debut at the Geneva Motor Show in March 2019. In 2020 and 2023, coachbuilders Coleman Milne and Binz debuted their hearse conversions of the Model S, named the Wisper and Binz.E, respectively. Both versions have 220 miles (350 km) of electric range. The Model S is the recipient of numerous awards.[note 11]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-UNESCO-72] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than satellite links could provide. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population".: 111 Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
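Mojibake of the sort mentioned above arises when text is encoded with one character encoding but decoded with another. The following is a minimal, illustrative Python sketch; the sample string and the choice of Windows-1252 as the "wrong" decoder are only for demonstration:

```python
# Minimal sketch of how mojibake arises: bytes produced by one character
# encoding are misinterpreted by another.
text = "日本語"                      # "Japanese language" in Japanese
utf8_bytes = text.encode("utf-8")    # the bytes actually stored or transmitted

print(utf8_bytes.decode("utf-8"))    # correct round trip: 日本語
print(utf8_bytes.decode("cp1252"))   # mojibake: æ—¥æœ¬èªž
```

Unicode-aware software avoids this by having both ends agree on the encoding, which today is almost always UTF-8.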
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.: 19 Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing and carrying out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
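To make the role of HTTP as the Web's access protocol (described above) concrete, the following Python sketch issues a bare-bones HTTP/1.1 request over a plain TCP socket and prints the status line of the response. It assumes nothing beyond the Python standard library; example.com is a domain reserved for documentation, and any reachable web server would answer in a similar way:

```python
# Minimal sketch of an HTTP/1.1 request over a TCP socket, showing that the
# Web is one application-layer service riding on TCP/IP.
import socket

host = "example.com"
with socket.create_connection((host, 80)) as sock:
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):   # empty bytes means the server closed
        response += chunk

# The status line and headers come back as plain text, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n")[0].decode("ascii"))
```

Real browsers layer far more on top of this (TLS, caching, cookies, content negotiation), but the exchange ultimately reduces to plain-text requests and responses like the one above.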
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, ARIN for North America, APNIC for Asia and the Pacific, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables that interconnect the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or are configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, as Internet address registries began urging all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network.: 1, 16 Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
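As a rough illustration of the CIDR notation, netmask arithmetic, and routing-table lookup just described, the short Python sketch below uses only the standard-library ipaddress module. The prefixes are the documentation examples from the text; the routing-table entries and next-hop addresses are made-up values, not a real router's configuration:

```python
# Subnet arithmetic for the documentation prefix 198.51.100.0/24,
# plus a toy longest-prefix-match lookup with a default route.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)                                   # 255.255.255.0
print(net.num_addresses)                             # 256 (198.51.100.0 .. .255)
print(ipaddress.ip_address("198.51.100.77") in net)  # True
print(ipaddress.ip_address("198.51.101.1") in net)   # False

# Applying the netmask to any address in the network with a bitwise AND
# yields the routing prefix, as described above.
addr = int(ipaddress.ip_address("198.51.100.77"))
mask = int(net.netmask)
print(ipaddress.ip_address(addr & mask))             # 198.51.100.0

# The IPv6 documentation prefix 2001:db8::/32 spans 2**96 addresses.
print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)  # True

# Toy routing table: the most specific (longest) matching prefix wins,
# and 0.0.0.0/0 acts as the default route of last resort.
routes = [
    (ipaddress.ip_network("198.51.100.0/24"), "deliver directly on local subnet"),
    (ipaddress.ip_network("203.0.113.0/25"),  "next hop 198.51.100.2"),
    (ipaddress.ip_network("0.0.0.0/0"),       "default gateway 198.51.100.1"),
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matching = [(n, hop) for n, hop in routes if dest in n]
    return max(matching, key=lambda entry: entry[0].prefixlen)[1]

print(next_hop("198.51.100.9"))   # deliver directly on local subnet
print(next_hop("203.0.113.40"))   # next hop 198.51.100.2
print(next_hop("192.0.2.1"))      # default gateway 198.51.100.1
```

Real routers perform the same longest-prefix match, over tables learned from routing protocols such as BGP rather than from a hand-written list.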
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. (Chart: global Internet traffic volume in petabytes per month, 1990–2015.) The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEKent2001516-99] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as Nintendo had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on what it had developed with Nintendo and Sega to create its own console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority of those present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name, in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles such as the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no game developers of its own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with what was described as a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. As one contemporary account from the retail side put it: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games to consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One form) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, for example, the console could not be released because a third company had registered the trademark; the officially distributed Sega Saturn dominated at first, but as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In another market, China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to around 300,000 by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for certain letters, reading "Live in Your World. Play in Ours." and "U R NOT E" (with a red "E", i.e. "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say: "Bullshit. 
Let me show you how ready I am." As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent club brands such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone even faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can render up to 360,000 flat-shaded polygons per second, around 180,000 per second with texturing and shading applied, and some 4,000 sprites. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and it shipped with the documentation and software needed to program PlayStation games and applications in C. 
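To make the memory figures above more concrete, the back-of-the-envelope sketch below is a hypothetical illustration in C, derived only from the RAM, colour-depth and resolution numbers quoted in this article rather than from any official Sony documentation; the listed display modes and per-pixel sizes are assumptions. It shows how much of the 1 MB of video RAM a frame buffer occupies at different settings, and therefore how much is left over for textures.

```c
#include <stdio.h>

/* Hypothetical frame-buffer arithmetic for a console with 1 MB of video RAM.
 * The display modes listed are illustrative examples, not an official list. */
static unsigned long frame_bytes(unsigned w, unsigned h,
                                 unsigned bytes_per_pixel, unsigned buffers)
{
    return (unsigned long)w * h * bytes_per_pixel * buffers;
}

int main(void)
{
    const unsigned long vram = 1024UL * 1024UL;  /* 1 MB of video RAM */

    const struct { const char *name; unsigned w, h, bpp, buffers; } modes[] = {
        { "256x224, 16-bit colour, double-buffered", 256, 224, 2, 2 },
        { "320x240, 16-bit colour, double-buffered", 320, 240, 2, 2 },
        { "640x480, 24-bit colour, single buffer",   640, 480, 3, 1 },
    };

    for (size_t i = 0; i < sizeof modes / sizeof modes[0]; i++) {
        unsigned long used = frame_bytes(modes[i].w, modes[i].h,
                                         modes[i].bpp, modes[i].buffers);
        printf("%-42s uses %7lu bytes, leaving %7lu bytes of VRAM\n",
               modes[i].name, used, vram - used);
    }
    return 0;
}
```

The point of the exercise is simply that the highest resolution and colour depth leave only a small fraction of the 1 MB of video RAM for texture data, which is consistent with developers' remarks elsewhere in this article about the console's tight memory budget.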
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding a degree of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons marked with simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were sufficient. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons activated by clicking the sticks in), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles, slightly different shoulder buttons and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that was not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! 
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so PlayStation discs' actual content could still be read by a conventional disc drive; however, the disc drive could not detect the wobble frequency (so duplicated discs omitted it), since the laser pick-up system of any optical disc drive would interpret this wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit up slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws in a small amount of power (and therefore heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3. Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. 
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute around 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation as the seventh-best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to bring a fully routed version with multilayer routing as well as documentation and design files in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64, which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. 
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Perchlorate] | [TOKENS: 6662]
Contents Perchlorate A perchlorate is a chemical compound used mainly in making rocket fuel and fireworks, as well as some other industrial uses. In many countries it is used to treat hyperthyroidism, but (mainly because of its effect on the thyroid gland) it is an environmental toxin which endangers human health when it contaminates food and water. Perchlorates contain the perchlorate ion, ClO4−, the conjugate base of perchloric acid (ionic perchlorate). As counterions, there can be metal cations, quaternary ammonium cations or other ions, for example, the nitronium cation (NO2+). The term perchlorate can also describe perchlorate esters or covalent perchlorates. These are organic compounds that are alkyl or aryl esters of perchloric acid. They are characterized by a covalent bond between an oxygen atom of the ClO4 moiety and an organyl group. In most ionic perchlorates, the cation is non-coordinating. The majority of ionic perchlorates are commercially produced salts commonly used as oxidizers for pyrotechnic devices and for their ability to control static electricity in food packaging. Additionally, they have been used in rocket propellants, fertilizers, and as bleaching agents in the paper and textile industries. Ionic perchlorates are typically colorless solids that exhibit good solubility in water. When they dissolve in water they dissociate, releasing the perchlorate ion. Many perchlorate salts also exhibit good solubility in non-aqueous solvents. Four perchlorates are of primary commercial interest: ammonium perchlorate NH4ClO4, perchloric acid HClO4, potassium perchlorate KClO4 and sodium perchlorate NaClO4. Production Very few chemical oxidants are strong enough to convert chlorate to perchlorate. Persulfate, ozone, or lead dioxide are all known to do so, but the reactions are too delicate and low-yielding for commercial viability. Perchlorate salts are typically manufactured through electrolysis, which involves oxidizing aqueous solutions of the corresponding chlorates. This technique is commonly employed in the production of sodium perchlorate, which finds widespread use as a key ingredient in rocket fuel. Perchlorate salts are also commonly produced by reacting perchloric acid with bases, such as ammonium hydroxide or sodium hydroxide. Ammonium perchlorate, which is highly valued, can also be produced via an electrochemical process. Perchlorate esters are formed in the presence of a nucleophilic catalyst via a perchlorate salt's nucleophilic substitution onto an alkylating agent. Uses Chemical properties The perchlorate ion is the least redox reactive of the generalized chlorates. Perchlorate contains chlorine in its highest oxidation number (+7). Tabulated reduction potentials of the four generalized chlorates show that, contrary to expectation, perchlorate in aqueous solution is the weakest oxidant among the four. These data also show that perchlorate and chlorate are stronger oxidizers in acidic conditions than in basic conditions. Gas phase measurements of heats of reaction (which allow computation of ΔfH°) of various chlorine oxides do follow the expected trend, wherein Cl2O7 exhibits the largest endothermic value of ΔfH° (238.1 kJ/mol) while Cl2O exhibits the lowest endothermic value of ΔfH° (80.3 kJ/mol). As perchloric acid is one of the strongest mineral acids, perchlorate is a very weak base in the sense of Brønsted–Lowry acid–base theory. 
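As a brief illustration of two points above (written in standard textbook notation rather than taken from this article's sources), perchloric acid dissociates essentially completely in water, which is why its conjugate base is such a weak base, and the electrolytic production route works by oxidizing chlorate at the anode:

```latex
% Perchloric acid dissociates essentially completely in water,
% which is why its conjugate base ClO4- behaves as a very weak base:
\mathrm{HClO_4 \;\longrightarrow\; H^{+} + ClO_4^{-}}

% Illustrative anodic half-reaction for the electrolytic oxidation of
% chlorate to perchlorate in aqueous solution:
\mathrm{ClO_3^{-} + H_2O \;\longrightarrow\; ClO_4^{-} + 2\,H^{+} + 2\,e^{-}}
```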
As it is also generally a weakly coordinating anion, perchlorate is commonly used as a background, or supporting, electrolyte. Perchlorate compounds oxidize organic compounds, especially when the mixture is heated. The explosive decomposition of ammonium perchlorate is catalyzed by metals and heat. As perchlorate is a weak Lewis base (i.e., a weak electron pair donor) and a weak nucleophilic anion, it is also a very weakly coordinating anion. This is why it is often used as a supporting electrolyte to study the complexation and the chemical speciation of many cations in aqueous solution or in electroanalytical methods (voltammetry, electrophoresis, etc.). Although perchlorate reduction is thermodynamically favorable (ΔG < 0; E° > 0) and ClO4− is therefore expected to be a strong oxidant, in aqueous solution it is most often practically an inert species, behaving as an extremely slow oxidant because of severe kinetic limitations. The metastable character of perchlorate in the presence of reducing cations such as Fe2+ in solution is due to the difficulty of forming an activated complex that facilitates the electron transfer and the exchange of oxo groups in the opposite direction. These strongly hydrated cations cannot form a sufficiently stable coordination bridge with one of the four oxo groups of the perchlorate anion. Although thermodynamically a mild reductant, the Fe2+ ion exhibits a strong tendency to remain coordinated by water molecules, forming the corresponding hexa-aqua complex in solution. The high activation energy required for the cation to bind with perchlorate and form a transient inner-sphere complex more favorable to electron transfer considerably hinders the redox reaction. The redox reaction rate is limited by the formation of a favorable activated complex involving an oxo bridge between the perchlorate anion and the metallic cation. It depends on the molecular orbital rearrangement (HOMO and LUMO orbitals) necessary for a fast oxygen atom transfer (OAT) and the associated electron transfer, as studied experimentally by Henry Taube (1983 Nobel Prize in Chemistry) and theoretically by Rudolph A. Marcus (1992 Nobel Prize in Chemistry), both awarded for their respective work on the mechanisms of electron-transfer reactions with metal complexes and in chemical systems. In contrast to Fe2+ cations, which remain unoxidized in deaerated aqueous perchlorate solutions free of dissolved oxygen, other cations such as Ru(II) and Ti(III) can form a more stable bridge between the metal centre and one of the oxo groups of ClO4−. For perchlorate reduction to proceed via the inner-sphere electron transfer mechanism, the ClO4− anion must quickly transfer an oxygen atom to the reducing cation. When this is the case, metallic cations can readily reduce perchlorate in solution. Ru(II) can reduce ClO4− to ClO3−, while V(II), V(III), Mo(III), Cr(II) and Ti(III) can reduce ClO4− to Cl−. Some metal complexes, especially those of rhenium, and some metalloenzymes can catalyze the reduction of perchlorate under mild conditions. Perchlorate reductase (see below), a molybdoenzyme, also catalyzes the reduction of perchlorate. Both the Re- and Mo-based catalysts operate via metal-oxo intermediates. Over 40 phylogenetically and metabolically diverse microorganisms capable of growth using perchlorate as an electron acceptor have been isolated since 1996. Most originate from the Pseudomonadota, but others include the Bacillota, Moorella perchloratireducens and Sporomusa sp., and the archaeon Archaeoglobus fulgidus. 
With the exception of A. fulgidus, microbes that grow via perchlorate reduction utilize the enzymes perchlorate reductase and chlorite dismutase, which collectively take perchlorate to chloride. In the process, free oxygen (O2) is generated. Natural abundance Perchlorate is created by lightning discharges in the presence of chloride. Perchlorate has been detected in rain and snow samples from Florida and Lubbock, Texas. It is also present in Martian soil. Naturally occurring perchlorate at its most abundant can be found commingled with deposits of sodium nitrate in the Atacama Desert of northern Chile. These deposits have been heavily mined as sources for nitrate-based fertilizers. Chilean nitrate is in fact estimated to be the source of around 81,000 tonnes (89,000 tons) of perchlorate imported to the U.S. (1909–1997). Results from surveys of ground water, ice, and relatively unperturbed deserts have been used to estimate a 100,000 to 3,000,000 tonnes (110,000 to 3,310,000 tons) "global inventory" of natural perchlorate presently on Earth. Perchlorate was detected in Martian soil at the level of ~0.6% by weight. It was shown that at the Phoenix landing site it was present as a mixture of 60% Ca(ClO4)2 and 40% Mg(ClO4)2. These salts, formed from perchlorates, act as antifreeze and substantially lower the freezing point of water. Based on the temperature and pressure conditions on present-day Mars at the Phoenix lander site, conditions would allow a perchlorate salt solution to be stable in liquid form for a few hours each day during the summer. The possibility that the perchlorate was a contaminant brought from Earth was eliminated by several lines of evidence. The Phoenix retro-rockets used ultra pure hydrazine and launch propellants consisting of ammonium perchlorate or ammonium nitrate. Sensors on board Phoenix found no traces of ammonium nitrate, and thus the nitrate in the quantities present in all three soil samples is indigenous to the Martian soil. Perchlorate is widespread in Martian soils at concentrations between 0.5 and 1%. At such concentrations, perchlorate could be an important source of oxygen, but it could also become a critical chemical hazard to astronauts. In 2006, a mechanism was proposed for the formation of perchlorates that is particularly relevant to the discovery of perchlorate at the Phoenix lander site. It was shown that soils with high concentrations of chloride converted to perchlorate in the presence of titanium dioxide and sunlight/ultraviolet light. The conversion was reproduced in the lab using chloride-rich soils from Death Valley. Other experiments have demonstrated that the formation of perchlorate is associated with wide band gap semiconducting oxides. In 2014, it was shown that perchlorate and chlorate can be produced from chloride minerals under Martian conditions via UV using only NaCl and silicate. Further findings of perchlorate and chlorate in the Martian meteorite EETA79001 and by the Mars Curiosity rover in 2012-2013 support the notion that perchlorates are globally distributed throughout the Martian surface. With concentrations approaching 0.5% and exceeding toxic levels on Martian soil, Martian perchlorates would present a serious challenge to human settlement, as well as microorganisms. On the other hand, the perchlorate would provide a convenient source of oxygen for the settlements. 
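The enzymatic route used by the perchlorate-respiring microbes described above, which is also what makes perchlorate attractive as an oxygen source, can be summarised schematically (a simplified sketch of the pathway named in the text, not a balanced mechanism from a specific study):

```latex
% Perchlorate reductase reduces perchlorate stepwise to chlorite;
% chlorite dismutase then splits chlorite into chloride and molecular oxygen.
\mathrm{ClO_4^{-}}
  \;\xrightarrow{\text{perchlorate reductase}}\; \mathrm{ClO_3^{-}}
  \;\xrightarrow{\text{perchlorate reductase}}\; \mathrm{ClO_2^{-}}
  \;\xrightarrow{\text{chlorite dismutase}}\; \mathrm{Cl^{-} + O_2}
```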
On September 28, 2015, NASA announced that analyses of spectral data from the Compact Reconnaissance Imaging Spectrometer for Mars instrument (CRISM) on board the Mars Reconnaissance Orbiter from four different locations where recurring slope lineae (RSL) are present found evidence for hydrated salts. The hydrated salts most consistent with the spectral absorption features are magnesium perchlorate, magnesium chlorate and sodium perchlorate. The findings strongly support the hypothesis that RSL form as a result of contemporary water activity on Mars. Contamination in environment Perchlorates are of concern because of uncertainties about toxicity and health effects at low levels in drinking water, impact on ecosystems, and indirect exposure pathways for humans due to accumulation in vegetables. They are water-soluble, exceedingly mobile in aqueous systems, and can persist for many decades under typical groundwater and surface water conditions. Perchlorates are used mostly in rocket propellants but also in disinfectants, bleaching agents, and herbicides. Perchlorate contamination occurs during both the manufacture and the ignition of rockets and fireworks. Fireworks are also a source of perchlorate in lakes. Removal and recovery methods of these compounds from explosives and rocket propellants include high-pressure water washout, which generates aqueous ammonium perchlorate. In 2000, several years after the plant had closed, perchlorate contamination was first discovered beneath the former Olin Corporation Flare Facility, a flare manufacturing plant in Morgan Hill, California. The plant had used potassium perchlorate as one of the ingredients during its 40 years of operation. By late 2003, the State of California and the Santa Clara Valley Water District had confirmed a groundwater plume extending over nine miles through residential and agricultural communities. The California Regional Water Quality Control Board and the Santa Clara Valley Water District have engaged in a major outreach effort, and a water well testing program has been under way for about 1,200 residential, municipal, and agricultural wells. Large ion exchange treatment units are operating in three public water supply systems which include seven municipal wells with perchlorate detection. The potentially responsible parties, Olin Corporation and Standard Fuse Incorporated, have been supplying bottled water to nearly 800 households with private wells, and the Regional Water Quality Control Board has been overseeing cleanup efforts. The source of perchlorate in California was mainly attributed to two manufacturers in the southeast portion of the Las Vegas Valley in Nevada, where perchlorate has been produced for industrial use. This led to perchlorate release into Lake Mead in Nevada and the Colorado River, which affected regions of Nevada, California and Arizona, where water from this reservoir is used for consumption, irrigation and recreation for approximately half the population of these states. Lake Mead has been identified as the source of 90% of the perchlorate in Southern Nevada's drinking water. Based on sampling, perchlorate is estimated to affect 20 million people, with the highest detections in Texas, southern California, New Jersey, and Massachusetts, but intensive sampling of the Great Plains and other middle state regions may lead to revised estimates with additional affected regions. An action level of 18 μg/L has been adopted by several affected states. 
In 2001, the chemical was detected at levels as high as 5 μg/L at Joint Base Cape Cod (formerly Massachusetts Military Reservation), above the then-applicable Massachusetts state regulation of 2 μg/L. As of 2009, low levels of perchlorate had been detected in both drinking water and groundwater in 26 states in the U.S., according to the Environmental Protection Agency (EPA). In 2004, the chemical was found in cow's milk in California at an average level of 1.3 parts per billion (ppb, or μg/L); it may have entered the cows through their feeding on crops exposed to water containing perchlorates. A 2005 study suggested human breast milk had an average of 10.5 μg/L of perchlorate. In some places, there is no clear source of perchlorate, and it may be naturally occurring. Natural perchlorate on Earth was first identified in terrestrial nitrate deposits and fertilizers of the Atacama Desert in Chile as early as the 1880s, and these deposits were for a long time considered a unique perchlorate source. The perchlorate released from historic use of Chilean nitrate-based fertilizer, which the U.S. imported by the hundreds of tons in the early 20th century, can still be found in some groundwater sources of the United States, for example on Long Island, New York. Recent improvements in analytical sensitivity using ion chromatography-based techniques have revealed a more widespread presence of natural perchlorate, particularly in subsoils of the southwestern USA, salt evaporites in California and Nevada, and Pleistocene groundwater in New Mexico, and even in extremely remote places such as Antarctica. The data from these studies and others indicate that natural perchlorate is globally deposited on Earth, with its subsequent accumulation and transport governed by the local hydrologic conditions. Despite its importance to environmental contamination, the specific source and processes involved in natural perchlorate production remain poorly understood. Laboratory experiments in conjunction with isotopic studies have implied that perchlorate may be produced on Earth by oxidation of chlorine species through pathways involving ozone or its photochemical products. Other studies have suggested that perchlorate can also be formed by lightning-activated oxidation of chloride aerosols (e.g., chloride in sea salt sprays), and by ultraviolet or thermal oxidation of chlorine (e.g., bleach solutions used in swimming pools) in water. Although perchlorate as an environmental contaminant is usually associated with the manufacture, storage, and testing of solid rocket motors, attention has also focused on perchlorate contamination as a side effect of the use of natural nitrate fertilizer and its release into groundwater. The use of naturally contaminated nitrate fertilizer contributes to the infiltration of perchlorate anions into the groundwater and threatens the water supplies of many regions in the US. One of the main sources of perchlorate contamination from natural nitrate fertilizer use was found to be fertilizer derived from Chilean caliche, because Chile has a rich source of naturally occurring perchlorate anion. Perchlorate concentration was highest in Chilean nitrate, ranging from 3.3 to 3.98%. Perchlorate in the solid fertilizer ranged from 0.7 to 2.0 mg g−1, a variation of less than a factor of 3, and it is estimated that sodium nitrate fertilizers derived from Chilean caliche contain approximately 0.5–2 mg g−1 of perchlorate anion. 
The direct ecological effect of perchlorate is not well known; its impact can be influenced by factors including rainfall and irrigation, dilution, natural attenuation, soil adsorption, and bioavailability. Quantification of perchlorate concentrations in nitrate fertilizer components via ion chromatography revealed that horticultural fertilizer components contained perchlorate at levels ranging between 0.1 and 0.46%. Environmental cleanup There have been many attempts to eliminate perchlorate contamination. Current remediation technologies for perchlorate have the downsides of high cost and difficulty of operation, so there has been interest in developing systems that would offer economical and greener alternatives. Several technologies can remove perchlorate, via treatments ex situ (away from the location) and in situ (at the location). Ex situ treatments include ion exchange using perchlorate-selective or nitrite-specific resins, bioremediation using packed-bed or fluidized-bed bioreactors, and membrane technologies via electrodialysis and reverse osmosis. In ex situ treatment via ion exchange, contaminant ions are attracted to and adhere to the ion exchange resin because the resin and the contaminant ions carry opposite charges. As a contaminant ion adheres to the resin, another ion of like charge is released into the water being treated, so that one ion is exchanged for the contaminant. Ion exchange technology has the advantages of being well suited to perchlorate treatment and offering high-volume throughput, but has the downside that it does not treat chlorinated solvents. In addition, liquid-phase carbon adsorption is employed as an ex situ technology, in which granular activated carbon (GAC) is used to eliminate low levels of perchlorate; pretreatment of the GAC may be required for effective perchlorate removal. In situ treatments, such as bioremediation via perchlorate-selective microbes and permeable reactive barriers, are also being used to treat perchlorate. In situ bioremediation has the advantages of minimal above-ground infrastructure and the ability to treat chlorinated solvents, perchlorate, nitrate, and RDX simultaneously. However, it has the downside that it may negatively affect secondary water quality. Phytoremediation could also be utilized in situ, even though the mechanism of perchlorate phytoremediation is not yet fully understood. Bioremediation using perchlorate-reducing bacteria, which reduce perchlorate ions to harmless chloride, has also been proposed. Health effects Perchlorate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Thus, it has been used to treat hyperthyroidism since the 1950s. At very high doses (70,000–300,000 ppb) the administration of potassium perchlorate was considered the standard of care in the United States, and it remains the approved pharmacologic intervention in many countries. In large amounts perchlorate interferes with iodine uptake into the thyroid gland. In adults, the thyroid gland helps regulate the metabolism by releasing hormones, while in children the thyroid helps in proper development. The NAS, in its 2005 report, Health Implications of Perchlorate Ingestion, emphasized that this effect, also known as Iodide Uptake Inhibition (IUI), is not an adverse health effect. However, in January 2008, California's Department of Toxic Substances Control stated that perchlorate is becoming a serious threat to human health and water resources. 
In 2010, the EPA's Office of the Inspector General determined that the agency's own perchlorate reference dose (RfD), equivalent to a drinking water level of 24.5 parts per billion, protects against all human biological effects from exposure (the federal government is responsible for groundwater contamination at US military bases). This finding was due to a significant shift in policy at the EPA in basing its risk assessment on non-adverse effects such as IUI instead of adverse effects. The Office of the Inspector General also found that because the EPA's perchlorate reference dose is conservative and protective of human health, further reducing perchlorate exposure below the reference dose does not effectively lower risk. Because of perchlorate's adverse effects upon children, Massachusetts set its maximum allowed limit for perchlorate in drinking water at 2 parts per billion (2 ppb = 2 micrograms per liter). Perchlorate affects only thyroid hormone. Because it is neither stored nor metabolized, effects of perchlorate on the thyroid gland are reversible, though effects on brain development from lack of thyroid hormone in fetuses, newborns, and children are not. Toxic effects of perchlorate have been studied in a survey of industrial plant workers who had been exposed to perchlorate, compared to a control group of other industrial plant workers who had no known exposure to perchlorate. After undergoing multiple tests, workers exposed to perchlorate were found to have a significant rise in systolic blood pressure compared to the workers who were not exposed to perchlorate, as well as significantly decreased thyroid function compared to the control workers. A study involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate can temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; perchlorate is thus a known goitrogen). The EPA converted this dose into a reference dose of 0.0007 mg/(kg·d) by dividing this level by the standard intraspecies uncertainty factor of 10. The agency then calculated a "drinking water equivalent level" of 24.5 ppb by assuming a person weighs 70 kg (150 lb) and consumes 2 L (0.44 imp gal; 0.53 US gal) of drinking water per day over a lifetime. In 2006, a study reported a statistical association between environmental levels of perchlorate and changes in thyroid hormones of women with low iodine. The study authors were careful to point out that hormone levels in all the study subjects remained within normal ranges. The authors also indicated that they did not originally normalize their findings for creatinine, which would have essentially accounted for fluctuations in the concentrations of one-time urine samples like those used in this study. When the Blount research was re-analyzed with the creatinine adjustment made and the study population limited to women of reproductive age (results not shown in the original analysis), any remaining association between the results and perchlorate intake disappeared. Soon after the revised Blount Study was released, Robert Utiger, a doctor with the Harvard Institute of Medicine, testified before the US Congress and stated: "I continue to believe that that reference dose, 0.0007 milligrams per kilo (24.5 ppb), which includes a factor of 10 to protect those who might be more vulnerable, is quite adequate." 
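The arithmetic behind the reference dose and the drinking water equivalent level described above can be reproduced directly from the figures given in the text. The following Python sketch is only an illustration of that calculation, not EPA software; the variable names are ours.

    # Reproduces the EPA arithmetic described above, using the values given in the text.
    # Illustrative sketch only; not an official EPA calculation tool.
    noel_mg_per_kg_day = 0.007       # no-effect level from the volunteer study, mg/(kg*day)
    uncertainty_factor = 10          # standard intraspecies uncertainty factor
    body_weight_kg = 70              # assumed adult body weight
    water_intake_l_per_day = 2       # assumed daily drinking water consumption

    rfd = noel_mg_per_kg_day / uncertainty_factor                  # reference dose, mg/(kg*day)
    dwel_mg_per_l = rfd * body_weight_kg / water_intake_l_per_day  # drinking water equivalent level
    print(f"RfD  = {rfd:.4f} mg/(kg*day)")                 # 0.0007
    print(f"DWEL = {dwel_mg_per_l * 1000:.1f} ug/L (ppb)")  # 24.5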
In 2014, a study was published showing that environmental exposure to perchlorate in pregnant women with hypothyroidism is associated with a significant risk of low IQ in their children. Some studies suggest that perchlorate has toxic effects against lungs (pulmonary toxicity) as well. Studies have been performed on rabbits in which perchlorate was injected into the trachea. The lung tissue was removed and analyzed, and it was found that perchlorate-injected lung tissue showed several adverse effects when compared to the control group that had been intratracheally injected with saline. Adverse effects included inflammatory infiltrates, alveolar collapse, subpleural thickening, and lymphocyte proliferation. In the early 1960s, potassium perchlorate used to treat Graves' disease was implicated in the development of aplastic anemia (a condition in which the bone marrow fails to produce new blood cells in sufficient quantity) in thirteen patients, seven of whom died. Subsequent investigations have indicated the connection between administration of potassium perchlorate and the development of aplastic anemia to be "equivocal at best", meaning that the evidence for a causal link is ambiguous. Regulation in the U.S. In 1998, perchlorate was included in the U.S. EPA Contaminant Candidate List, primarily due to its detection in California drinking water. In 2002, the EPA completed its draft toxicological review of perchlorate and proposed a reference dose of 0.00003 milligrams per kilogram per day (mg/kg/day) based primarily on studies that identified neurodevelopmental deficits in rat pups. These deficits were linked to maternal exposure to perchlorate. In 2003, a federal district court in California found that the Comprehensive Environmental Response, Compensation and Liability Act applied because perchlorate is ignitable and therefore a "characteristic" hazardous waste. Subsequently, the U.S. National Research Council of the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher reference dose of 0.0007 mg/kg/day based primarily on a 2002 study by Greer et al. In that study, 37 adult human subjects were split into four exposure groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposed group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by the NAS was accepted by the EPA and added to its Integrated Risk Information System (IRIS). Although there has generally been consensus regarding the Greer et al. study, there has been no consensus with regard to deriving a perchlorate RfD from it. One of the key differences results from how the point of departure is viewed (i.e., as a no-observed-effect level, NOEL, or a lowest-observed-adverse-effect level, LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD. 
In early 2006, EPA issued a "Cleanup Guidance" and recommended a Drinking Water Equivalent Level (DWEL) for perchlorate of 24.5 μg/L. Both the DWEL and the Cleanup Guidance were based on a 2005 review of the existing research by the National Academy of Sciences (NAS). In the absence of a federal drinking water standard, several states subsequently published their own standards for perchlorate, including Massachusetts in 2006 and California in 2007. Other states, including Arizona, Maryland, Nevada, New Mexico, New York, and Texas, have established non-enforceable, advisory levels for perchlorate. In 2008, EPA issued an interim drinking water health advisory for perchlorate, along with guidance and analysis concerning its impacts on the environment and drinking water. California also issued guidance regarding perchlorate use. Both the Department of Defense and some environmental groups voiced questions about the NAS report, but no credible science has emerged to challenge the NAS findings. In February 2008, the U.S. Food and Drug Administration (FDA) reported that U.S. toddlers on average were being exposed to more than half of EPA's safe dose from food alone. In March 2009, a Centers for Disease Control study found 15 brands of infant formula contaminated with perchlorate, and that, combined with existing perchlorate contamination of drinking water, infants could be at risk for perchlorate exposure above the levels considered safe by EPA. In 2010, the Massachusetts Department of Environmental Protection set a tenfold lower RfD (0.07 μg/kg/day) than the NAS RfD, using a much higher uncertainty factor of 100. It also calculated an infant drinking water value, which neither the US EPA nor CalEPA had done. On February 11, 2011, EPA determined that perchlorate meets the Safe Drinking Water Act criteria for regulation as a contaminant. The agency found that perchlorate may have an adverse effect on the health of persons and is known to occur in public water systems at a frequency and at levels that present a public health concern. Since then, EPA has continued to determine what level of contamination is appropriate, and has prepared extensive responses to submitted public comments. In 2016, the Natural Resources Defense Council (NRDC) filed a lawsuit to accelerate EPA's regulation of perchlorate. In 2019, EPA proposed a Maximum Contaminant Level of 0.056 mg/L for public water systems. On June 18, 2020, EPA announced that it was withdrawing its 2011 regulatory determination and its 2019 proposal, stating that it had taken "proactive steps" with state and local governments to address perchlorate contamination. In September 2020, NRDC filed suit against EPA for its failure to regulate perchlorate, and stated that 26 million people may be affected by perchlorate in their drinking water. On March 31, 2022, the EPA announced that a review confirmed its 2020 decision. Following the NRDC lawsuit, in 2023 the US Court of Appeals for the DC Circuit ordered EPA to develop a perchlorate standard for public water systems. EPA stated that it will publish a proposed standard for perchlorate in 2025, and issue a final rule in 2027. Covalent perchlorates Although perchlorate is typically found as a non-coordinating anion, a few metal complexes are known. Hexaperchloratoaluminate and tetraperchloratoaluminate are strong oxidising agents. Several perchlorate esters are known. 
For example, methyl perchlorate is a high-energy material that is a strong alkylating agent. Chlorine perchlorate is a covalent inorganic analog. Safety As discussed above, perchlorate competes with iodide for uptake by the thyroid gland. In the presence of reductants, perchlorate forms potentially explosive mixtures. The PEPCON disaster destroyed an ammonium perchlorate production plant when a fire caused the ammonium perchlorate stored on site to react with the aluminum of its storage tanks and explode.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Shin_Megami_Tensei:_Persona] | [TOKENS: 9260]
Contents Persona (series) Persona,[Jp. 1] previously marketed as Shin Megami Tensei: Persona outside of Japan, is a video game franchise primarily developed by Atlus and owned by Sega.[a] Centered around a series of Japanese role-playing video games, Persona is a spin-off from Atlus' Megami Tensei franchise. The first entry in the series, Revelations: Persona,[b] was released in 1996 for the PlayStation. The series has seen several more games since, with the most recent main entry being 2024's Persona 3 Reload, a remake of the 2006 game Persona 3. Persona began as a spin-off based on the positively-received high school setting of Shin Megami Tensei If... (1994). Persona's core features include a group of students as the main cast, a silent protagonist similar to the mainline Megami Tensei franchise, and combat using Personas. Beginning with Persona 3 in 2006, the main series came to focus more on, and become renowned for, the immersive social simulation elements that came with the addition of Social Links, which are directly linked to how Personas evolve. Character designs are by series co-creator Kazuma Kaneko (Persona and the Persona 2 duology) and Shigenori Soejima (Persona 3 onwards). Its overall theme is the exploration of the human psyche and how the characters find their true selves. The series' recurring concepts and design elements draw on Jungian psychology, psychological personas and tarot cards, along with religious, mythological, and literary themes and influences. Revelations: Persona was the first role-playing Megami Tensei game to be released outside of Japan. Beginning with Persona 2: Eternal Punishment, the English localizations began to remain faithful to the Japanese versions at the insistence of Atlus. The series is highly popular internationally, becoming the best-known Megami Tensei spin-off and establishing Atlus and the Megami Tensei franchise in North America. Following the release of Persona 3 and 4, the series also established a strong following in Europe. The series has since gone on to sell over 27 million copies worldwide, outselling its parent franchise. There have been numerous adaptations, including anime series, films, novelizations, manga, stage plays, radio dramas, art books, and musical concerts. Games Persona 3 received a Japan-exclusive spin-off titled Persona 3: The Night Before;[Jp. 2] it follows a similar cycle of daytime activities and night time combat as the original game, with one player being chosen as the party leader each night. After its closure in 2008, a new free-to-play browser game titled Persona Ain Soph[Jp. 3] was released that year; the gameplay focused on players fusing Personas and confronting a threat known as the Qliphoth. Staying exclusive to Japan, it closed down in June 2010. A fighting game sequel to Persona 4, Persona 4 Arena, was released in arcades in Japan in 2012. Console versions were released in 2012 in Japan and North America, and 2013 in Europe. A sequel, Persona 4 Arena Ultimax, was similarly released in Japanese arcades in 2013, then released in 2014 in all regions for consoles. A standalone spin-off for the Nintendo 3DS, Persona Q: Shadow of the Labyrinth, was released worldwide in 2014; it features the full casts of Persona 3 and 4, and is classed by Atlus as an official entry in the Persona canon. A sequel, Persona Q2: New Cinema Labyrinth, saw the addition of the Persona 5 characters and was released in Japan in 2018 and worldwide in 2019. 
A rhythm game set after the events of Persona 4 Arena Ultimax, Persona 4: Dancing All Night, was released worldwide in 2015. Two follow-ups to Dancing All Night, Persona 3: Dancing in Moonlight and Persona 5: Dancing in Starlight, were released together in 2018. A Dynasty Warriors-style action role-playing sequel to Persona 5, Persona 5 Strikers, was released in Japan in 2020 and worldwide the following year. A tactics spin-off of Persona 5, Persona 5 Tactica, was released in November 2023. Several Persona mobile games have been made in partnership with other Japanese mobile companies such as BBMF. Their first partnership was in 2006 with the development and release of Megami Ibunroku Persona: Ikū no Tō-hen, a 3D dungeon crawler set during the events of the first Persona game. The companies later collaborated on two mobile games based on the Persona 2 games: Persona 2: Innocent Sin - Lost Memories[Jp. 4] in 2007, and Persona 2: Eternal Punishment - Infinity Mask[Jp. 5] in 2009. Both games carried over the basic gameplay functions of the original games tailored for mobile phones. Many mobile spin-offs are related to Persona 3: there is an RPG side-story titled Persona 3 Em,[Jp. 6] an action game prequel set ten years prior to Persona 3 titled Aegis: The First Mission, and an alternate version of Persona 3 featuring different characters titled Persona 3 Social.[Jp. 7] Multiple Persona 3-themed puzzle games have also been developed. An online mobile RPG set around the high school featured in Persona 3, titled Persona Mobile Online,[Jp. 8] was released in 2009. Persona 4 likewise received a mobile card game spin-off, titled Persona 4 The Card Battle.[Jp. 9] A mobile spin-off for Persona 5 entitled Persona 5: The Phantom X was released in June 2025, developed by Black Wings Game Studio and published by Perfect World Games. Common elements The gameplay of the Persona series revolves around combat against various enemy types: Demons, Shadows and Personas. Main combat takes place during dungeon crawling segments within various locations. The way battles initiate varies between random encounters (Persona, Persona 2) or running into models representing enemy groups (Persona 3 onwards). Battles are governed by a turn-based system, where the player party and enemies each attack the opposing side. Actions in battle include standard physical attacks using short-range melee or long-range projectile weapons, magical attacks, using items, guarding, and under certain conditions escaping from battles. During battle, either side can strike an enemy's weakness, which deals more damage than other attacks. Starting with Persona 3, landing a critical hit grants the character an extra turn. If all enemies are knocked down by critical hits, the party can perform an "All Out Attack", with all party members attacking at once and dealing high damage. Each party member is manually controlled by the player in all but one Persona title: in Persona 3, all the party apart from the main character are controlled by an AI-based command system. The general gameplay has remained consistent across all Persona games. Each Persona game also includes unique elements. In Persona, battles take place on a grid-based battlefield, with characters' and enemies' movements dictated by their placement on the battlefield. This system was abandoned for the Persona 2 games: the party has free movement across the battlefield, and is assigned a set of moves which can be changed in the menu during and in between battles. 
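As a rough illustration of the battle rules described above (weakness hits dealing extra damage, critical hits knocking enemies down and granting an extra turn, and an "All-Out Attack" once every enemy is down), the following Python sketch models the flow in a simplified form. It is not Atlus's implementation; the class names, numbers, and enemy data are invented for the example.

    import random

    class Enemy:
        """Toy enemy with a hit-point total and a single elemental weakness."""
        def __init__(self, name, hp, weakness):
            self.name, self.hp, self.weakness = name, hp, weakness
            self.knocked_down = False

    def attack(enemy, element, damage, crit_chance=0.2):
        """Resolve one attack: a weakness hit deals extra damage, and a critical
        hit knocks the enemy down and grants the attacker an extra turn."""
        if element == enemy.weakness:
            damage *= 2                        # striking a weakness deals more damage
        if random.random() < crit_chance:      # critical hit
            enemy.knocked_down = True
            enemy.hp -= damage
            return True                        # attacker gets an extra turn
        enemy.hp -= damage
        return False

    def all_out_attack(party_size, enemies):
        """Once every enemy is knocked down, the whole party attacks at once."""
        if all(e.knocked_down for e in enemies):
            for e in enemies:
                e.hp -= 10 * party_size        # arbitrary high-damage figure
            return True
        return False

    enemies = [Enemy("Shadow A", 60, "fire"), Enemy("Shadow B", 45, "ice")]
    extra_turn = attack(enemies[0], "fire", 15)    # weakness hit, possibly also a critical
    attack(enemies[1], "ice", 15)
    if all_out_attack(4, enemies):
        print("All-Out Attack!")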
In Persona and Persona 3, there is a lunar phase tied to gameplay, time progression, and the plot. In Persona 4, this was changed to a weather-based system, where changes in the weather keyed to the story affected enemy behavior. Persona 5 introduces elements such as platforming and stealth gameplay to dungeon exploration. The All-Out Attack can be initiated in a "Hold-Up" session, triggered when all enemies are knocked down. A defining aspect of the series is the use of the "Persona", which are physical manifestations of a person's psyche and subconscious used for combat. The main Personas for the cast used up to Persona 3 were inspired by Greco-Roman mythology. Persona 4's were based on Japanese deities; while Persona 5 used characters inspired by fictional and historical outlaws and thieves. The summoning ritual for Personas in battle varies throughout the series: in early games, the party gains the ability to summon through a short ritual after playing a parlor game; in Persona 3, they fire a gun-like device called an Evoker at their head to overcome their cowardice; in Persona 4, they summon their Personas by destroying Tarot cards; in Persona 5, they are summoned through the removal of the characters' masks. Personas are used for types of physical attack and magical attacks, along with actions such as healing and curing or inflicting status effects. For all Persona games, all playable characters start out with an initial Persona, which can evolve into other Personas through story-based events and use during battle. In multiple Persona games, two or more Personas can be summoned at once to perform a powerful Fusion Spell. In Persona 3, 4 and 5, only the main character can wield and change between multiple Personas; the other characters use a single Persona. During the course of the game, the player acquires more Personas through a system of Skill Cards, represented by Major Arcana Tarot cards. Each skill card represents a different Persona family, which in turn hold their own abilities inherent to that family. Multiple Personas can be fused together to create a new Persona with improved and inherited abilities: these range from fusing two Personas in the Persona 2 duology to up to twelve in Persona 4. Starting with Persona 3, the main protagonist of each game has an ability known as "Wild Card", an ability to summon multiple Personas represented by the Fool Arcana. "Social Links" is a system introduced in Persona 3 that is a form of character interaction tied to the growth of Personas. During their time outside battle, the main character can interact with and grow a particular Social Link, which acts as an independent character growth system tied to a Persona family or Arcanum. As the main character's relationship with the character representing a Social Link grows, its rank is raised and more powerful Personas related to the Social Link's assigned Arcanum can be summoned and fused. Attributes related to the main character's social life can also be used to improve their Persona abilities, such as their academic abilities and social aptitude. An enhanced version of the Social Link system, known as "Confidants", appeared in Persona 5. In Persona, the Persona 2 duology, and Persona 5, there is also a "Negotiation" mechanic carried over from the Megami Tensei series, in which player characters can talk with enemies and provoke certain actions depending on their dialogue choices. Some responses yield Skill Cards for use in creating new Personas. 
Negotiation was removed from Persona 3 and Persona 4, although Atlus staff considered the Social Link system and aspects of Persona fusion to be a "disguised" version of it. In Persona 5, Negotiations can be initiated during a "Hold Up" session; Shadows can be persuaded to join the party as a new Persona if the Negotiation is successful, the player does not already have them, and the player is at an appropriate experience level. The Persona series takes place in modern-day Japan and focuses on a group of high school students, the exception being Eternal Punishment, which focused on a group of adults. The setting has been described as urban fantasy, with extraordinary events happening in otherwise normal locations. The typical setting used is a city, with a noted exception being the rural town setting of Persona 4. Although they are typically stand-alone games that only share thematic elements, the Persona games share a continuity, with elements from previous games turning up in later ones. Persona and the Persona 2 games shared narrative elements, which were concluded with Eternal Punishment, so Persona 3 started out with a fresh setting and characters. The first in the series is Persona, set in the year 1996. This is followed by the events of Innocent Sin and Eternal Punishment in 1999. At the end of Innocent Sin, the main characters rewrite events to avert the destruction of Earth, creating the Eternal Punishment reality, with the original reality becoming an isolated Other Side. Persona 3 and subsequent games stem from Eternal Punishment. Persona 3 is set from 2009 to 2010, and Persona 4 is set from 2011 to 2012. The Persona 4 Arena games and Dancing All Night take place in the months following Persona 4. In contrast, Persona 5 is set in a non-specific year referred to as "20XX", while Strikers is set several months after the events of Persona 5. The Persona Q series takes place in a separate enclosed world into which the characters of Persona 3, 4, and 5 are drawn from their respective time periods. Dialogue in Q2 also suggests that Persona 5 takes place only a few years after 4. A central concept for the series is the collective unconscious, a place generated by the hearts of humanity and from which Personas are born. According to the official Persona Club P3 book, the collective unconscious was generated by the primitive life on Earth as a means of containing the spiritual essence of Nyx, a space-born being whose presence would cause the death of all life on Earth. Her body was damaged by the impact with the Earth and became the moon, while her psyche was left on the surface and locked away at the heart of the collective unconscious. The fragments of Nyx's psyche, known as "Shadows", are both a threat and a crucial part of humanity's existence. To further help defend against hostile Shadows, people generated the deities that exist within the collective unconscious, many of which manifest as Personas. Nyx appears in Persona 3 as the antagonist. The major dungeon locations in each game are generated by the latent wishes and desires of humans and are generally used by another force for its own ends. A recurring location appearing in most of the games is the "Velvet Room", a place between reality and unconsciousness created by Philemon that changes form depending on the psyche of its current guest. Its inhabitants, led by an enigmatic old man called Igor, aid the main characters by helping them hone their Persona abilities. 
While the room is normally inaccessible and invisible to all except those who have forged a contract with it, others can be summoned inside alongside the guest, intentionally or otherwise. The main character of each Persona game is a silent protagonist representing the player, with a manner described by the series' director as "silent and cool". When the writer for new story content in Eternal Punishment's PSP version wished for the main character to have spoken dialogue, this was vetoed as it went against the series tradition. Two recurring characters generated by the collective unconscious are Philemon and Nyarlathotep, the respective representatives of the positive and negative traits of humanity. In Innocent Sin, the two reveal that they are engaged in a proxy contest as to whether humanity can embrace its contradictory feelings and find a higher purpose before destroying itself. Philemon makes appearances in later Persona games as a blue butterfly. Many of the major antagonists in the series are personifications of death generated by the human subconscious. The central theme of the Persona series is exploration of the human psyche and the main characters discovering their true selves. The stories generally focus on the main cast's interpersonal relationships and psychologies. There is also an underlying focus on "the human soul". Many of the concepts and characters within the series (Personas, Shadows, Philemon) use Jungian psychology and archetypes. A recurring motif is the "masks" people wear during everyday life, which tie back to their Personas. This motif was more overtly expressed in Persona 5 through the main cast's use of masks in their thief guises. The dual lives of the main casts are directly inspired by these themes. Each game also includes specific themes and motifs. Persona 2 focuses on the effect of rumors on the fabric of reality (referred to by the developers as "the power of Kotodama"); Persona 3 employs themes involving depression and the darkness within people; Persona 4 focuses on how gossip and the media influence people's views of others; and Persona 5 shows how the main characters pursue personal freedom in a restrictive modern society. Zhang Cheng of The Paper thought that Persona 3, 4 and 5 regarded emotions and bonds as the ultimate weapon against alienation in postmodern society, and that they call on everyone to become positive people. A recurring element in the earlier entries is "The Butterfly Dream", a famous story by the Chinese philosopher Zhuang Zhou. It ties in with the series' themes, and also with Philemon's frequent appearances as a butterfly. Philemon's original appearance was based on Zhuang Zhou. The character Nyarlathotep is based on the character of the same name from H. P. Lovecraft's Cthulhu Mythos, and the Mythos as a whole is frequently referenced in Persona 2. The Velvet Room was based on the Black Lodge from Twin Peaks, while Igor and his assistants are all named after characters from Mary Shelley's novel Frankenstein and its adaptations. Development The Persona series was first conceived after the release of Shin Megami Tensei If... for the Super Famicom. As the high school setting of If... had been positively received, Atlus decided to create a dedicated subseries focusing on the inner struggles of young adults. 
The focus on high school life was also decided upon due to the experiences of the series' creators, Kouji Okada and Kazuma Kaneko: according to them, as nearly everyone experiences being a student at some point in their lives, it was something everyone could relate to, representing a time of both learning and personal freedom. In their view, this approach helped players accept the series' themes and the variety of ideas included in each title. Kaneko in particular tried to recreate his experiences and the impact it had on him during his time with the series. The main concept behind the first game was a Megami Tensei title that was more approachable for new and casual players than the main series. The abundance of casual games on the PlayStation reinforced this decision. The game's title, Megami Ibunroku,[Jp. 10] represented the game's status as a direct spin-off from the series. It was later dropped to further define Persona as a standalone series. After the success of Persona, Innocent Sin began development, retaining many of the original staff. During the writing of Innocent Sin, it was decided that the world of Persona 2 needed a different perspective than that of the current protagonist. This decision laid the groundwork for Eternal Punishment. Following this, the Persona series entered a hiatus while focus turned to other projects, including Shin Megami Tensei III: Nocturne. The conceptual Persona 3 was submitted to Atlus in 2003 by Katsura Hashino, who had worked as a designer for multiple Megami Tensei games and had been the director for Nocturne. Gaining Atlus' approval of the concept, development started in the same year, after the completion of Nocturne and the Digital Devil Saga duology. Persona 3 was part of Atlus' push to expand their player base outside of Japan. Ideas were being passed around about Persona 4, but the game did not begin official development until after the release of Persona 3. Preparations for Persona 5's development began in 2010. The team decided to shift towards more challenging story themes, saying that the shift would be more drastic than that experienced with Persona 3. Persona 4 Arena and its sequel were the first non-RPG collaborative project in the series: its success inspired the creation of both Persona Q and Dancing All Night. The first three Persona games were developed by Atlus' internal R&D1 studio, the studio responsible for the mainline Megami Tensei games. Beginning with Persona 3, a dedicated team originally referred to as the 2nd Creative Production Department began handling development for the series. The team was later renamed P-Studio in 2012. Hashino remained in charge of the studio until the Japanese release of Persona 5 in 2016, when he moved to found a new department, Studio Zero, to work on non-Persona projects. Aside from Atlus, other developers have helped develop entries in the Persona series. During the pre-production stage of Persona 4 Arena, Hashino approached Arc System Works after being impressed by their work on the BlazBlue series. For Dancing All Night, development was initially handled by Dingo, but due to quality concerns Atlus took over primary development with Dingo being retained as a supporting developer. 
The two character artists for the Persona series are Kazuma Kaneko, a central artist in the main Megami Tensei series who designed characters for the first three Persona games, and Shigenori Soejima, who worked in a secondary capacity alongside Kaneko and took Kaneko's place as the character designer from Persona 3 onwards. While designing the characters for Persona, Kaneko was inspired by multiple notable celebrities and fictional characters of the time, along with members of Atlus staff. In Persona and Innocent Sin, the main characters all wore the same school uniforms, so Kaneko differentiated them using accessories. For Eternal Punishment, the main cast were adults, so Kaneko needed to rethink his design procedure. Eventually, he adopted the concept of ordinary adults, and gave them designs that would stand out in-game. Soejima's first major work for the series was working on side characters for Persona 2 alongside Kaneko. Kaneko put Soejima in charge of the series' art direction after Persona 2 as Kaneko did not want to imprint his drawing style on the Persona series, and also wanted Soejima to gain experience. Soejima felt a degree of pressure when he was given his new role, as the series had accumulated a substantial following during Kaneko's tenure. In a later interview, Soejima said that although he respected and admired Kaneko, he never consciously imitated the latter's work, and eventually settled into the role of pleasing the fans of the Persona series, approaching character designs with the idea of creating something new rather than referring back to Kaneko's work. For his character designs, Soejima uses real people he has met or seen, looking at what their appearance says about their personality. If his designs come too close to the people he has seen, he does a rough sketch while keeping the personality of the person in mind. For his work on Persona Q, his first time working with a deformed Chibi style due to its links with the Etrian Odyssey series, Soejima took into account what fans felt about the characters. A crucial part of his design technique was looking at what made a character stand out, then adjusting those features so they remained recognizable even with the redesign. Starting with Persona 3, each Persona game has been defined by a different aesthetic and key color. It is one of the first artistic decisions made by the team: Persona 3 has a dark atmosphere and serious characters, so the primary color was chosen as blue to reflect these and the urban setting. In contrast, Persona 4 has a lighter tone and characters but also sports a murder-mystery plot, so the color yellow was chosen to represent both the lighter tones and to evoke a "warning" signal. According to Soejima, blue was the "color of adolescence", and yellow was the "color of happiness". For Persona 5, the color chosen was red, to convey a harsh feeling in contrast to the previous Persona games and tie in with the game's story themes. Its art style was described as a natural evolution from where Persona 4 left off. The music of the Persona series has been handled by multiple composers. The one most associated with the series is Shoji Meguro, who began working on Persona shortly after he joined Atlus in 1995. His very first composition for the game was "Aria of the Soul", the theme for the Velvet Room that became a recurring track throughout the series. 
During his initial work on the series, Meguro felt restricted by the limited storage space of the PlayStation's disc system, and so when he began composing for Persona 3, which allowed for sound streaming due to increased hardware capacity, he was able to fully express his musical style. His main worry for his music in Persona 3 and 4 was the singers' pronunciation of the English lyrics. He was unable to work on the Persona 2 games as he was tied up with other projects, including Maken X. Meguro also served as the lead composer for Persona 5, using elements of acid jazz, inspired in particular by the British band Jamiroquai, and drawing on the game's themes to achieve the right mood. The music for Innocent Sin and Eternal Punishment was handled by Toshiko Tasaki, Kenichi Tsuchiya, and Masaki Kurokawa. Tsuchiya had originally done minor work on Persona, and found composing for the games a strenuous experience. Spin-offs, such as the Persona Q and Dancing subseries, are usually handled by other Atlus composers such as Atsushi Kitajoh, Toshiki Konishi, and Ryota Kozuka. Release The series consists of twenty games, not counting re-releases and mobile games. Persona was the first role-playing entry in the Megami Tensei franchise to be released outside of Japan, as previous entries had been considered ineligible due to possibly controversial content. As such content appeared in a milder form in Persona, the restrictions did not apply. According to Atlus, Persona and its sequel were to test player reactions to the Megami Tensei series outside of Japan. The great majority of Persona games were either first released on or exclusive to PlayStation platforms. This trend was broken with the release of Persona Q for the 3DS in 2014. All the Persona games have been published by Atlus in Japan and North America. An exception in Japan was the Windows port of Persona, which was published by ASCII Corporation. After 2016, due to Atlus USA's merger with Sega of America, Sega took over North American publishing duties, although the Atlus brand remained intact. Since then, Atlus has been releasing ports of the main Persona games for non-PlayStation platforms, beginning with the release of Persona 4 Golden on Windows in 2020, which marked the first time a numbered entry in the series was released for PC worldwide. Sega would also assist Atlus in porting Persona 5 Royal and Persona 3 Portable to Windows, in addition to Nintendo Switch, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X/S, throughout 2022 and 2023. Persona 3 Reload, a remake of Persona 3 (2006), was launched in 2024 for PlayStation 4, PlayStation 5, Windows, Xbox One and Xbox Series X/S, making it both the first main entry in the franchise to receive a simultaneous worldwide release and the first to be available on non-PlayStation formats from launch. In a quarterly earnings report from November 2023, SEGA Sammy president Haruki Satomi suggested that all future Persona games would follow a similar release and availability cadence in order to meet company expectations of selling at least 5 million units in a new game's first year. Due to the company not having a European branch, Atlus has generally given publishing duties to other third-party publishers with branches in Europe. This frequently results in a gap between North American and European release dates ranging from a few months to a year or more. For Persona 3, Atlus gave publishing duties to Koei. 
For Persona 4, European publishing was handled by Square Enix. Persona 4 Arena was originally published in Europe by Zen United after a long delay, but the digital rights were eventually returned to Atlus, resulting in the game being removed from PSN. Atlus ended up re-publishing the digital PlayStation version in Europe. They had previously digitally published the PSP port of Persona in Europe and Australia. Arena Ultimax was published in Europe by Sega, who had recently purchased Atlus' parent company. It was speculated that this could lead to a new trend that would shorten the release gap between North America and Europe. A regular publishing partner was Ghostlight, whose relations with Atlus went back to the European release of Nocturne. A more recent partner was NIS America, which published Persona 4 Golden, Persona Q, and Dancing All Night. Atlus' partnership with NIS America ended in 2016, with NIS America citing difficulties with the company since its acquisition by Sega as reasons for the split. As part of their statement, NIS America said that Atlus had become "very picky" about European partners, selecting those which could offer the highest minimal sales guarantee on their products. Sega of America and Atlus USA eventually entered into a partnership with European publishing company Deep Silver to publish multiple games in the region, including Persona 5. The localizations for the Persona series are generally handled by translator Yu Namba of Atlus USA, who also handles localization for multiple other Megami Tensei games. Another prominent staff member was Nich Maragos, who worked with Namba on multiple Persona games until moving to Nintendo of America prior to 2015. The localization of Persona was handled by a small team, which put a lot of pressure on them as they needed to adjust the game for Western audiences: the changes implemented included altering names, changing the appearance of characters, and removing numerous cultural references. An entire alternate main quest was also removed. After Persona, it was decided that future Persona games should be as faithful as possible to their original releases. Namba's first localization project for the series was Eternal Punishment. For the release of Innocent Sin, there was a debate over whether to release it, as it contained potentially controversial content including allusions to Nazism. In the end, due to staff and resource shortages, Innocent Sin was passed over for localization in favor of its sequel Eternal Punishment. Later, when the company developed the PSP ports, the team released the ports of Persona and Innocent Sin overseas so fans attracted by Persona 3 and 4 would be able to easily catch up with the rest of the series. The localization for Persona was completely redone, reverting all the previous altered content and restoring all previously cut content. The port of Eternal Punishment was not localized due to "unusual circumstances", so the company released the original version on PSN instead. For the localizations of Persona 3 and 4, the team incorporated as much of the original content as possible, such as using Japanese honorifics and keeping the game's currency as yen rather than changing it. As a general rule, they incorporate cultural elements from the original versions unless they would not be understood by the player, such as with certain jokes. Nevertheless, some changes had to be made. 
In one instance, the character Mitsuru Kirijo was originally an English speaker, but her second language for the localized version was changed to French due to her cultured appearance. School tests also needed to be changed due to similar language-based issues. The Social Links were originally called "Community",[Jp. 11] but this was changed as the word "Community" had a very specific meaning in English. The new name was inspired by the way the character Igor made reference to the concept using words such as "society" and "bonds". Some in-game Easter egg references were also changed: in Persona 3 references to the larger Megami Tensei series by a character in an in-game MMORPG were changed to reference earlier Persona games, while mentions of a fictional detective in Persona 4 were altered to reference the Kuzunoha family from Eternal Punishment and the Devil Summoner series. Character names have also needed adjustment, such as the stage name of Persona 4 character Rise Kujikawa, and the way characters referred to each other was adjusted to appeal more to a western audience. Persona 5 was also localized in this fashion. The localized English names of games have also been altered. The banner title for Persona was changed from Megami Ibunroku to Revelations, principally because the team thought the latter name sounded "cool". The Revelations title was removed for Innocent Sin and Eternal Punishment. After the successful release of Nocturne, the "Shin Megami Tensei" moniker was added to the series title to help with Western marketing. This has not been the case for some games: Persona 4 Arena's original title, Persona 4: The Ultimate in Mayonaka Arena, was shortened as it sounded "awkward", and the "Shin Megami Tensei" moniker was dropped as it would have made the title too long, which has been applied to every game in the series since. The same change was made for Persona 4 Golden and Persona 5 Royal, with the team dropping "The" that was in the Japanese title because it would have sounded "odd" in English-speaking regions. Reception The first Persona was referred to at the time as a sleeper hit, and the success of it and Eternal Punishment helped establish both Atlus and the Megami Tensei franchise in North America. In Europe, the series did not become established prior to the release of Persona 3 and 4, both of which were highly successful in the region. According to Atlus CEO Naoto Hiraoka, the main turning point for the franchise was the release of Persona 3, which was a commercial success and brought the series to the attention of the mainstream gaming community. Persona 4 received an even better reception. The Persona series' success has allowed Atlus to build a strong player base outside of Japan, contributing to the success of other games such as Catherine.[c] The Persona series has been referred to as the most popular spin-off from the Megami Tensei franchise, gaining notoriety and success in its own right. James Whitbrook of Io9 commented that while "here in the west, we've got plenty of awesome urban fantasy, especially from a YA perspective. But what makes Persona interesting is that it's the familiar concept of urban fantasy, the balance of the mundane 'normal' life of the protagonists and the problems they have there with the fantastical nature of the supernatural world that lies beneath all that, from a Japanese perspective. 
Over here, that's much less common, and the way the series portrays urban fantasy through that lens is what makes it so different, especially from what you would normally expect from Japanese RPGs." Nintendo Power, in an article concerning the Megami Tensei series, cited the Persona series' "modern-day horror stories" and "teams of Japanese high-school kids" as the perfect example of the franchise. Persona was mentioned in 1999 by GameSpot's Andrew Vestal as a game that deserved attention despite not aging well, saying "Examining Persona reveals three of the traits that make the series so popular – and unique – amongst RPG fans: demonology, negotiation, and psychology". The game has been named a cult classic. Persona 3 was named by RPGamer as the greatest RPG of the past decade in 2009, and RPGFan listed Persona 3 and 4 in second and fourth place respectively in its similar 2011 list. Persona 3 was listed by Gamasutra as one of the 20 essential RPGs for players of the genre. Persona 4 was also listed by Famitsu as one of the greatest games of all time in a 2010 list. As well as gaining critical acclaim, the series has been the subject of controversy in the West over its content. The first instance of controversy surrounded the localized banner title of the original Persona, which raised concerns due to its religious implications. Kurt Kalata, writing for 1UP.com in 2006 about the controversial content of the Megami Tensei franchise as a whole, mentioned Innocent Sin's references to homosexuality, schoolyard violence, and Nazism, considering them possible reasons why the game had yet to be released outside of Japan. In 1UP.com's 2007 game awards, which ran in the March 2008 issue of Electronic Gaming Monthly, Persona 3 was given the "Most controversial game that created no controversy" award: the writers said "Rockstar's Hot Coffee sex scandal and Bully's boy-on-boy kissing's got nothing on this PS2 role-player's suicide-initiated battles or subplot involving student–teacher dating". Persona 4 has in turn been examined by multiple sites over its portrayal of character sexuality and gender identity. Particular controversy surrounds the three mainline titles headed by Katsura Hashino (Persona 3, Persona 4, and Persona 5), as players and journalists have noted distasteful depictions of homosexuality within the trio of games. Persona 3 features a non-optional interaction with a female-presenting NPC who attempts to flirt with the game's male main characters before being observed to have a small amount of facial hair, leading the repulsed characters and the player to believe that this NPC is a transgender woman. Persona 5 also has a string of comedic non-optional interactions with two seemingly gay men which were the subject of extensive criticism, and were then altered in the Western localization of Persona 5 Royal. However, the scenes are still present and the issues audiences had were not fully alleviated. Persona 4 has been subject to various criticisms for its depictions of potentially queer characters. Major character Kanji Tatsumi is seen as potentially being bisexual or gay, leading to him being made the butt of many homophobic jokes and jabs from another main character, Yosuke Hanamura. The writing of Naoto Shirogane has also drawn criticism in regard to her relationship with gender identity, as her character arc involves embracing her biological gender of female after previously identifying as male due to concerns surrounding the treatment of women in the police force. 
By November 2024, the series had sold over 23 million copies worldwide. Related media The first anime adaptation of the Persona series, a television series based on Persona 3 titled Persona: Trinity Soul, aired in 2008. Trinity Soul takes place in an alternate setting ten years after Persona 3, making it a non-canon entry in the franchise. It was animated by A-1 Pictures, directed by Jun Matsumoto, written by a team that included Yasuyuki Muto, Shogo Yasukawa, and Shinsuke Onishi, and composed for by Taku Iwasaki. Its characters were designed by Soejima and Yuriko Ishii, while Persona designs were done by Nobuhiko Genma. It was distributed internationally by NIS America. An anime adaptation of the original Persona 4, Persona 4: The Animation, aired in 2011. The 25-episode series was produced by AIC ASTA and directed by Seiji Kishi. In 2014, a series based on Persona 4 Golden, titled Persona 4: The Golden Animation, was produced by A-1 Pictures. This series, which retains the cast of the original adaptation, dramatizes the new material included in Persona 4 Golden, focusing on the protagonist's encounters with new character Marie. A standalone prequel anime created by A-1 Pictures, Persona 5 The Animation: The Day Breakers, was released in September 2016 prior to the Japanese release of the game. A full anime series based on Persona 5, Persona 5: The Animation, aired in 2018. The original Persona 4 anime series was made into a condensed film adaptation titled Persona 4: The Animation - The Factor of Hope; it was released in Japanese cinemas in 2012. Persona 3 has also been adapted into a series of anime films produced by AIC ASTA and featuring staff from Persona 4: The Animation, released in cinemas in Japan and licensed for release overseas by Aniplex. The four films are titled #1 Spring of Birth, #2 Midsummer Knight's Dream, #3 Falling Down, and #4 Winter of Rebirth. They were released from 2013 to 2016. For both Persona 4: The Animation and the Persona 3 film series, one of the main concerns was the portrayal of the lead characters, which were originally dictated by player actions. Persona was adapted into an eight-issue manga series titled Megami Ibunroku Persona, originally serialized in 1996 and later reissued in 2009. A second spin-off manga, Persona: Tsumi to Batsu,[Jp. 12] was released to tie in with the release of the Persona 2 games. Set within the same setting of the Persona 2 games, it follows a separate story. In its 2011 reissue, new material was added that connected the manga to the events of Innocent Sin. Persona 3, Persona 4, and Persona 5 have all received their own manga adaptations. Another manga based on Persona Q was also serialized: two separate manga storylines, based on the two storylines featured in the game, were written and dubbed Side:P3 and Side:P4. Multiple novels based on Persona 3 and 4 have also been released. Six stage plays based on Persona 3 have been produced under the banner Persona 3: The Weird Masquerade. They received limited runs and featured separate performances for the male and female versions of the game's protagonists. Persona 4 was also adapted into two stage plays, both produced by Marvelous AQL and receiving limited runs in 2012: Visualive and Visualive the Evolution. A stage play based on Persona 4 Arena was likewise given a limited run in December 2014, and one based on Persona 4 Arena Ultimax ran in July 2016. Beginning in 2019, a series of four plays based on Persona 5 were produced, titled Persona 5 The Stage. 
They received DVD releases and were later made available on Crunchyroll. Persona 3 Reload is also receiving its own stage play, Persona 3 Lunation The Act. The first act ran from July 6 to July 13, 2025, at Theatre G-Rosso; it includes only the male protagonist, played by Mizuki Umetsu, and is directed by Makoto Kimura. A DVD of the stage play has been released, though it is currently unknown whether it will be translated. Atlus has created or hosted media dedicated to the Persona series. A dedicated magazine originally ran for ten issues between 2011 and 2012, and has been irregularly revived since then. An official talk show released on the official Persona website and Niconico, Persona Stalker Club, began in February 2014. Hosted by freelance writer Mafia Kajita and actress Tomomi Isomura, it was designed to deepen the connection between Atlus and the Persona fanbase. Concerts featuring music from the Persona series have also been performed, and some have received commercial releases on home media in Japan. Action figures and merchandise such as clothing have also been produced. The series was also represented in the 2018 crossover fighting game Super Smash Bros. Ultimate with the April 2019 downloadable content (DLC) inclusion of Joker, the protagonist of Persona 5. Along with him, a Persona-themed stage, eleven musical tracks from the series, and Mii costumes of Morgana, Teddie, and the main protagonists from Persona 3 and 4 were also featured. In June 2022, as part of the series' 25th anniversary, Sega expressed in an interview with IGN its desire to expand the Persona series and other Atlus properties into live-action film and television, as had been done with its flagship property, Sonic the Hedgehog, and its 2020 film adaptation. Toru Nakahara, Sega's lead producer on the Sonic the Hedgehog films and the Netflix animated series Sonic Prime, stated of Atlus' games that, "Stories like those from the Persona franchise really resonate with our fans and we see an opportunity to expand the lore like no one has seen — or played — before". Joker from Persona 5 makes a cameo in Sega's theatrical production logo, which debuted in the aforementioned Sonic the Hedgehog film adaptation. In 2023, actors Jun Shison and Haruna Kawaguchi were appointed as official ambassadors for the Persona series, appearing in commercials and other promotional campaigns for it.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#Golden_age] | [TOKENS: 13839]
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar results were later found for Reissner–Nordström and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars, and by 1969 these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: almost every galaxy had a supermassive black hole at its center, many of which were quiescent.
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; the stellar-mass black hole GRS 1915+105, for example, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gas accreting onto the black hole. The method requires an independent measurement of the black hole mass and inclination angle of the accretion disk followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle, under which particles resist being in the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
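The spin and charge bounds quoted above can be made concrete with a short calculation. The following is a minimal sketch, not part of the source article: it evaluates the maximum angular momentum, the dimensionless spin magnitude, and the combined charge-spin inequality for an assumed 10-solar-mass black hole. The constants and trial values are illustrative only.

```python
import math

# Minimal sketch (not from the source article): evaluating the spin and charge
# bounds quoted above. The 10-solar-mass hole and the trial spin value are
# arbitrary illustrative inputs.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
M_sun = 1.989e30     # solar mass, kg

def max_angular_momentum(M):
    """Upper bound J <= G M^2 / c for an uncharged black hole."""
    return G * M**2 / c

def dimensionless_spin(M, J):
    """Spin magnitude a* = c J / (G M^2), which must lie in [0, 1]."""
    return c * J / (G * M**2)

def has_event_horizon(M, Q, J):
    """Check the combined bound Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2."""
    return Q**2 / (4 * math.pi * eps0) + c**2 * J**2 / (G * M**2) <= G * M**2

M = 10 * M_sun                      # assumed 10-solar-mass black hole
J_max = max_angular_momentum(M)
print(f"J_max = {J_max:.3e} kg m^2/s")
print("a* at half-maximal spin:", dimensionless_spin(M, 0.5 * J_max))
print("horizon exists at half-maximal spin, zero charge:",
      has_event_horizon(M, 0.0, 0.5 * J_max))
```

A value of the dimensionless spin close to 1 would indicate a near-extremal hole, which is how the spin of M87* mentioned above is usually reported.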
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110–350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹–10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50–100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the regions around black holes among the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole is accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford–Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion.
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbations to this orbit will lead to the particle spiraling into the black hole, and any outward perturbations will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where r_ISCO is the radius of the ISCO, r_s is the Schwarzschild radius of the black hole, G is the gravitational constant, and c is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
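As a quick numerical check on the formula above, the following minimal sketch (not from the source article) computes the Schwarzschild radius and the corresponding ISCO for a non-rotating black hole of assumed mass. The 4.3 million solar mass value is the article's estimate for Sagittarius A*, used here purely as an example input.

```python
# Minimal sketch (not from the source article): Schwarzschild radius and ISCO
# of a non-rotating, uncharged black hole. The mass below is the article's
# estimate for Sagittarius A*, used only as an example.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
M_sun = 1.989e30      # solar mass, kg

def schwarzschild_radius(M):
    """r_s = 2GM/c^2, the event horizon radius of a non-spinning hole, in metres."""
    return 2 * G * M / c**2

def isco_radius(M):
    """r_ISCO = 3 r_s = 6GM/c^2 for a spinless particle around a Schwarzschild hole."""
    return 3 * schwarzschild_radius(M)

M = 4.3e6 * M_sun
print(f"r_s    ≈ {schwarzschild_radius(M)/1e9:.1f} million km")   # ≈ 13 million km
print(f"r_ISCO ≈ {isco_radius(M)/1e9:.1f} million km")            # three times larger
```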
For example, the ISCO for a particle orbiting retrograde can be as far out as about 9 r s {\displaystyle 9r_{\text{s}}} , while the ISCO for a particle orbiting prograde can be as close as at the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1-3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3-5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates similar to a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region. 
In this area it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km,} } where r_s is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off of the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes, which lack singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or a neutron star and a black hole. Other more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars, a hypothetical object powered by annihilation of dark matter, or from hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it gets to iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process to build supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of nearly pure hydrogen gas (low metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way, which then could grow to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, conditions needed to form black holes are rare and are mostly only found in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from collisions of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little matter for the black holes to interact with gravitationally, which would otherwise allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter on black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
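To put a number on the Eddington limit described above, the following minimal sketch (not part of the source article) evaluates the standard Eddington luminosity for ionized hydrogen, L_Edd = 4πGMm_p c/σ_T. The formula and physical constants are standard astrophysics rather than taken from the text, and the 10-solar-mass input is only an example.

```python
import math

# Minimal sketch (not from the source article): the Eddington luminosity for
# hydrogen accretion, L_Edd = 4*pi*G*M*m_p*c / sigma_T, i.e. the luminosity at
# which radiation pressure on the infalling gas balances gravity.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
m_p = 1.673e-27        # proton mass, kg
sigma_T = 6.652e-29    # Thomson scattering cross-section, m^2
M_sun = 1.989e30       # solar mass, kg
L_sun = 3.828e26       # solar luminosity, W

def eddington_luminosity(M):
    """Luminosity (in watts) above which radiation pressure halts steady spherical accretion."""
    return 4 * math.pi * G * M * m_p * c / sigma_T

M = 10 * M_sun         # assumed 10-solar-mass black hole
L_edd = eddington_luminosity(M)
print(f"L_Edd for 10 M_sun: {L_edd:.2e} W ≈ {L_edd / L_sun:.2e} L_sun")   # roughly 1e32 W
```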
Accretion beyond the limit is called Super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. 
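The inverse relationship between mass and Hawking temperature mentioned above can be illustrated with a short calculation. The following is a minimal sketch, not from the source article, using the standard expression T = ħc³/(8πGMk_B); the solar-mass and lunar-mass inputs simply echo the comparisons quoted in the text.

```python
import math

# Minimal sketch (not from the source article): Hawking temperature of a
# Schwarzschild black hole, T = hbar*c^3 / (8*pi*G*M*k_B), which is inversely
# proportional to the mass. Inputs echo the comparisons made in the text.
hbar = 1.055e-34      # reduced Planck constant, J s
c = 2.998e8           # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23       # Boltzmann constant, J/K
M_sun = 1.989e30      # solar mass, kg
M_moon = 7.35e22      # lunar mass, kg

def hawking_temperature(M):
    """Black-body temperature (kelvins) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"1 solar mass: {hawking_temperature(M_sun)*1e9:.0f} nK")  # ~62 nK, far below the 2.7 K CMB
print(f"1 lunar mass: {hawking_temperature(M_moon):.1f} K")      # ~1.7 K, so evaporation needs a mass below the Moon's
```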
These laws are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero.: 11 Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.: 29 Observational evidence Millions of black holes with around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.: 11 The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is based on its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, they do not cancel out and produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for noise from Earth to be able to detect these gravitational waves. Since the first measurements in 2015, multiple gravitational waves from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
In 1998, by fitting the motions of the stars to Keplerian orbits, astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×10⁶ M☉, with a radius of less than 0.002 light-years. This upper-limit radius is still larger than the Schwarzschild radius for the estimated mass, so the combination does not by itself prove that Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman-Oppenheimer-Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
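As a back-of-the-envelope check on the size comparison above (a sketch using round numbers rather than values from this article), the Schwarzschild radius for the measured mass is

r_{s} = \frac{2GM}{c^{2}} \approx 3\ \mathrm{km}\times\frac{M}{M_{\odot}} \approx 1.3\times10^{10}\ \mathrm{m} \approx 1.3\times10^{-6}\ \text{light-years for}\ M \approx 4.3\times10^{6}\ M_{\odot}

roughly 1,500 times smaller than the 0.002 light-year upper limit on the size of the central object, which is why the orbital data alone cannot prove that Sagittarius A* is a black hole.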
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied carefully enough in attempts both to identify and to measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the separation between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves into the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive compact objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Still more exotic stars called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which would function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information could be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, owing to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk limiting the outward radiation pressure that would otherwise prevent the black hole from accreting. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with the characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes gained public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting close to a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship that, from the perspective of an outside observer, approaches but never crosses the event horizon of a black hole because of time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fragmentation_(computing)] | [TOKENS: 3028]
Contents Fragmentation (computing) In computer storage, fragmentation is a phenomenon in which data is broken into many non-contiguous pieces and storage space, such as computer memory or a hard drive, is used inefficiently, reducing capacity or performance and often both. The exact consequences of fragmentation depend on the specific system of storage allocation in use and the particular form of fragmentation. In many cases, fragmentation leads to storage space being "wasted", and programs will tend to run inefficiently due to the shortage of memory. Basic principle In main memory fragmentation, when a computer program requests blocks of memory from the computer system, the blocks are allocated in chunks. When the computer program is finished with a chunk, it can free it back to the system, making it available to be allocated later to another program or to the same one. The size of a chunk and the amount of time it is held by a program vary. During its lifespan, a computer program can request and free many chunks of memory. Fragmentation can occur when a block of memory is requested by a program and allocated to it, but never freed (uncommitted) back to the system. This leaves memory that is theoretically "available" but unused marked as allocated, which reduces the amount of globally available memory and makes it harder for programs to request and access memory. When a program is started, the free memory areas are long and contiguous. Over time and with use, the long contiguous regions become fragmented into smaller and smaller contiguous areas. Eventually, it may become impossible for the program to obtain large contiguous chunks of memory. Types Fragmentation occurs whenever a piece of data or a region of free space is broken into several pieces. Fragmentation is often accepted in return for improvements in speed or simplicity. Analogous phenomena occur for other resources such as processors; see below. Memory fragmentation can be internal or external. Internal fragmentation occurs when a memory allocation provides more space than was requested, and the excess goes to waste. External fragmentation occurs when free memory is fragmented, preventing newer, larger blocks from being allocated. In modern operating systems, a program receives memory in fixed-size pages, which it then parcels out among different parts of itself (often via an allocator such as malloc). As a result, there are two layers of allocation that may each produce their own internal and external fragmentation: the level of the memory allocator (e.g. malloc) and the level of pages. (Older operating systems might not use pages, but programs still tend to use a two-level system to avoid excessive calls to the OS.) Due to the rules governing memory allocation, more computer memory is sometimes allocated than is needed. When this happens, the excess memory goes to waste. In this scenario, the unusable memory, known as slack space, is contained within an allocated region. One such arrangement, termed fixed partitions, suffers from especially inefficient memory use: any process, no matter how small, occupies an entire partition. This waste is called internal fragmentation. Internal fragmentation at the allocator level is very difficult to reclaim after it has occurred; usually the best way to remove it is with a design change. For example, in dynamic memory allocation, memory pools drastically cut internal fragmentation by spreading the space overhead over a larger number of objects.
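As a toy illustration of internal fragmentation (a sketch, not any particular allocator: the fixed allocation unit and the request sizes below are made up), the following Lua snippet rounds each request up to a fixed block size and reports how much of the allocated space is slack:

-- Simulate a fixed-size-block allocator: every request is rounded up
-- to a multiple of BLOCK, and the unused remainder is internal fragmentation.
local BLOCK = 64  -- hypothetical allocation unit, in bytes

local function allocate(requests)
  local requested, allocated = 0, 0
  for _, size in ipairs(requests) do
    local blocks = math.ceil(size / BLOCK)
    requested = requested + size
    allocated = allocated + blocks * BLOCK
  end
  return requested, allocated, allocated - requested  -- slack = internal fragmentation
end

local requested, allocated, slack = allocate({ 10, 100, 65, 200, 1 })
print(string.format("requested %d B, allocated %d B, slack %d B (%.0f%% wasted)",
  requested, allocated, slack, 100 * slack / allocated))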
Internal fragmentation on the page level (which is external fragmentation on the allocator level) can be reclaimed by moving allocated blocks. This requires editing pointers to reflect the new locations of the moved blocks, which is difficult for non-garbage-collected languages. Compacting garbage collectors perform such pointer fixups as they compact the virtual address space. External fragmentation arises when free memory is separated into small blocks and is interspersed with allocated memory. It is a weakness of certain storage allocation algorithms, which fail to order the memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions. For example, consider a situation wherein a program allocates three contiguous blocks of memory and then frees the middle block. The memory allocator can use this free block of memory for future allocations. However, it cannot use this block if the memory to be allocated is larger than this free block. External fragmentation on the page level can be reclaimed by an application's own allocator or garbage collector as it frees up and hands back (uncommits) pages to the OS. External fragmentation on the page level can also be reclaimed by the operating system by moving pages in the physical address space and editing the page table to match. This is called page migration. The application is oblivious to the process because the virtual memory addresses remain unchanged. Data fragmentation occurs when a collection of data in memory is broken up into many pieces that are not close together. File systems interact with block devices such as disk drives. These devices manage data in small, fixed-size units called blocks or sectors, which file systems abstract into their own units, also called blocks or clusters. When a file system is created, there is free space to store file blocks together contiguously. This allows for rapid sequential file reads and writes. However, as files are added, removed, and changed in size, the free space becomes externally fragmented, leaving only small holes in which to place new data. This is external fragmentation. File systems differ from memory in that they are designed with fragmentation in mind: when a new file is written, or when an existing file is extended, the operating system puts the new data in new non-contiguous groups of clusters (extents) to fit into the available holes. The new data blocks are necessarily scattered, slowing access due to the seek time and rotational latency of the read/write head, and incurring additional overhead to manage the additional locations. This is called file system fragmentation. The effect is even worse if a file which is divided into many small pieces is deleted, because this leaves similarly small regions of free space. When writing a new file of a known size, if there are any empty holes that are larger than that file, the operating system can avoid data fragmentation by putting the file into any one of those holes. There are a variety of algorithms for selecting which of those potential holes to put the file into; each of them is a heuristic approximate solution to the bin packing problem. The "best fit" algorithm chooses the smallest hole that is big enough. The "worst fit" algorithm chooses the largest hole.
The "first-fit algorithm" chooses the first hole that is big enough. The "next fit" algorithm keeps track of where each file was written. The "next fit" algorithm is faster than "first fit," which is in turn faster than "best fit," which is the same speed as "worst fit". Just as compaction can eliminate external fragmentation, (external) data fragmentation can be eliminated by rearranging data storage so that related pieces are close together. For example, the primary job of a defragmentation tool is to rearrange blocks on disk so that the blocks of each file are contiguous. Most defragmenting utilities also attempt to reduce or eliminate free space fragmentation. Some moving garbage collectors, utilities that perform automatic memory management, will also move related objects close together (this is called compacting) to improve cache performance. There are four kinds of systems that never experience (external) data fragmentation—they always store every file contiguously. All four kinds have significant disadvantages compared to systems that allow at least some temporary data fragmentation: Internal fragmentation is rarely a concern on file systems because it does not have a global effect of making the entire file system's file allocation worse. It does cause a storage space overhead and slow down access to large sets of affected small files (the set as a whole can be considered "fragmented"). To combat this problem, filesystems such as ext4 and btrfs have an inline data or inline file feature to forgo allocating a cluster for these files and instead directly store it in the inode (file record). Compared to external fragmentation, overhead and internal fragmentation account for little loss in terms of wasted memory and reduced performance. It is defined as: Fragmentation of 0% means that all the free memory is in a single large block; fragmentation is 90% (for example) when 100 MB free memory is present but largest free block of memory for storage is just 10 MB. External fragmentation tends to be less of a problem in file systems than in primary memory (RAM) storage systems, because programs usually require their RAM storage requests to be fulfilled with contiguous blocks, but file systems typically are designed to be able to use any collection of available blocks (fragments) to assemble a file which logically appears contiguous. Therefore, if a highly fragmented file or many small files are deleted from a full volume and then a new file with size equal to the newly freed space is created, the new file will simply reuse the same fragments that were freed by the deletion. If what was deleted was one file, the new file will be just as fragmented as that old file was, but in any case there will be no barrier to using all the (highly fragmented) free space to create the new file. In RAM, on the other hand, the storage systems used often cannot assemble a large block to meet a request from small noncontiguous free blocks, and so the request cannot be fulfilled and the program cannot proceed to do whatever it needed that memory for (unless it can reissue the request as a number of smaller separate requests). Problems The most severe problem caused by fragmentation is causing a process or system to fail, due to premature resource exhaustion: if a contiguous block must be stored and cannot be stored, failure occurs. Fragmentation causes this to occur even if there is enough of the resource, but not a contiguous amount. 
For example, if a computer has 4 GiB of memory and 2 GiB are free, but the memory is fragmented in an alternating sequence of 1 MiB used, 1 MiB free, then a request for 1 contiguous GiB of memory cannot be satisfied even though 2 GiB total are free. In order to avoid this, the allocator may, instead of failing, trigger a defragmentation (or memory compaction cycle) or other resource reclamation, such as a major garbage collection cycle, in the hope that it will then be able to satisfy the request. This allows the process to proceed, but can severely impact performance. Fragmentation causes performance degradation for a number of reasons. Most basically, fragmentation increases the work required to allocate and access a resource. For example, on a hard drive or tape drive, sequential data reads are very fast, but seeking to a different address is slow, so reading or writing a fragmented file requires numerous seeks and is thus much slower, in addition to causing greater wear on the device. Further, if a resource is not fragmented, allocation requests can simply be satisfied by returning a single block from the start of the free area. However, if it is fragmented, the request requires either searching for a large enough free block, which may take a long time, or fulfilling the request by several smaller blocks (if this is possible), which results in this allocation being fragmented, and requiring additional overhead to manage the several pieces. A subtler problem is that fragmentation may prematurely exhaust a cache, causing thrashing, due to caches holding blocks, not individual data. For example, suppose a program has a working set of 256 KiB, and is running on a computer with a 256 KiB cache (say L2 instruction+data cache), so the entire working set fits in cache and thus executes quickly, at least in terms of cache hits. Suppose further that it has 64 translation lookaside buffer (TLB) entries, each for a 4 KiB page: each memory access requires a virtual-to-physical translation, which is fast if the page is in cache (here TLB). If the working set is unfragmented, then it will fit onto exactly 64 pages (the page working set will be 64 pages), and all memory lookups can be served from cache. However, if the working set is fragmented, then it will not fit into 64 pages, and execution will slow due to thrashing: pages will be repeatedly added and removed from the TLB during operation. Thus cache sizing in system design must include margin to account for fragmentation. Memory fragmentation is one of the most severe problems faced by system managers.[citation needed] Over time, it leads to degradation of system performance. Eventually, memory fragmentation may lead to complete loss of (application-usable) free memory. Memory fragmentation is a kernel programming level problem. During real-time computing of applications, fragmentation levels can reach as high as 99%, and may lead to system crashes or other instabilities.[citation needed] This type of system crash can be difficult to avoid, as it is impossible to anticipate the critical rise in levels of memory fragmentation. 
However, while it may not be possible for a system to continue running all programs in the case of excessive memory fragmentation, a well-designed system should be able to recover from the critical fragmentation condition by moving in some memory blocks used by the system itself in order to enable consolidation of free memory into fewer, larger blocks, or, in the worst case, by terminating some programs to free their memory and then defragmenting the resulting sum total of free memory. This will at least avoid a true crash in the sense of system failure and allow the system to continue running some programs, save program data, etc. Fragmentation is a phenomenon of system software design; different software will be susceptible to fragmentation to different degrees, and it is possible to design a system that will never be forced to shut down or kill processes as a result of memory fragmentation. Analogous phenomena While fragmentation is best known as a problem in memory allocation, analogous phenomena occur for other resources, notably processors. For example, in a system that uses time-sharing for preemptive multitasking, but that does not check if a process is blocked, a process that executes for part of its time slice but then blocks and cannot proceed for the remainder of its time slice wastes time because of the resulting internal fragmentation of time slices. More fundamentally, time-sharing itself causes external fragmentation of processes due to running them in fragmented time slices, rather than in a single unbroken run. The resulting cost of process switching and increased cache pressure from multiple processes using the same caches can result in degraded performance. In concurrent systems, particularly distributed systems, when a group of processes must interact in order to progress, if the processes are scheduled at separate times or on separate machines (fragmented across time or machines), the time spent waiting for each other or in communicating with each other can severely degrade performance. Instead, performant systems require coscheduling of the group. Some flash file systems have several different kinds of internal fragmentation involving "dead space" and "dark space". See also References Sources
========================================
[SOURCE: https://en.wikipedia.org/wiki/Myxobolus_cerebralis] | [TOKENS: 4244]
Contents Myxobolus cerebralis Synonyms: Myxosoma cerebralis, Triactinomyxon dubium, Triactinomyxon gyrosalmo. Myxobolus cerebralis is a myxosporean parasite of salmonids (salmon and trout species) that causes whirling disease in farmed salmon and trout and also in wild fish populations. It was first described in rainbow trout in Germany in 1893, but its range has spread and it has appeared in most of Europe (including Russia), the United States, South Africa, Canada and other countries through shipments of cultured and wild fish. In the 1980s, M. cerebralis was found to require a tubificid oligochaete (a kind of segmented worm) to complete its life cycle. The parasite infects its hosts with its cells after piercing them with polar filaments ejected from nematocyst-like capsules. The infection reaches the cartilage and possibly the nervous tissue of salmonids, causing a potentially lethal disease in which the host develops a black tail, spinal deformities, and possibly further deformities in the anterior part of the fish.[citation needed] Whirling disease affects juvenile fish (fingerlings and fry) and causes skeletal deformation and neurological damage. Fish "whirl" forward in an awkward, corkscrew-like pattern instead of swimming normally, find feeding difficult, and are more vulnerable to predation. The mortality rate is high for fingerlings, up to 90% of infected populations, and those that do survive are deformed by the parasites residing in their cartilage, bone, and neurological tissue. They act as a reservoir for the parasite, which is released into the water following the fish's death. M. cerebralis is one of the most economically important myxozoans in fish, as well as one of the most pathogenic. It was the first myxosporean whose pathology and symptoms were described scientifically. The parasite is not transmissible to humans. The taxonomy and naming of both M. cerebralis and of myxozoans in general have complicated histories. It was originally thought to infect fish brains (hence the specific epithet cerebralis) and nervous systems, though it was soon found to primarily infect cartilage, skeletal tissue, and nervous tissue. Attempts to change the name to Myxobolus chondrophagus, which would more accurately describe the organism, failed because of nomenclature rules[which?].[citation needed] Later, the organisms previously called Triactinomyxon dubium and T. gyrosalmo (class Actinosporea) were found to be, in fact, triactinomyxon stages of M. cerebralis, whose life cycle was expanded to include the triactinomyxon stage. Similarly, other actinosporeans were folded into the life cycles of various myxosporeans.[citation needed] Taxonomy M. cerebralis is one of the 1,350 myxozoan parasites known to infect fish. Once thought to be a species of Protozoa, M. cerebralis was later recognized by taxonomists to have characteristics that more closely relate it to the phylum Cnidaria. These features included nematocyst-like polar capsules, whose extrudable filaments are used to hold onto the host. M. cerebralis has many diverse stages ranging from single cells to relatively large spores, not all of which have been studied in detail. Its complex life cycle involves two different hosts and numerous developmental stages. These stages arise through mitosis, endogeny, plasmotomy, or possibly meiosis. In the first part of its life cycle, M. cerebralis attaches to its salmonid host externally. It then uses its extruded polar filaments to infect the host, causing the skeletal tissues and nervous system to become deformed.
Today, the myxozoans, previously thought to be multicellular protozoans, are considered animals by most scientists, though their status has not officially changed. Recent molecular studies suggest they are related to Bilateria or Cnidaria, with Cnidaria being closer morphologically because both groups have extrusive filaments. Bilateria were somewhat closer in some genetic studies, but those were found to have used samples that were contaminated by material from the host organism, and a 2015 study confirms they are cnidarians.[citation needed] Morphology M. cerebralis has many diverse stages ranging from single cells to relatively large spores, not all of which have been studied in detail. The stages that infect fish, called triactinomyxon spores, are made of a single style that is about 150 micrometers (μm) long and three processes or "tails", each about 200 micrometers long. These spores are typically oval shaped, and display asymmetrical symmetry. A sporoplasm packet at the end of the style contains 64 germ cells surrounded by a cellular envelope. There are also three polar capsules, each of which contains a coiled polar filament between 170 and 180 μm long, with about 5–6 coils in each filament. Polar filaments in both this stage and in the myxospore stage (see picture above) rapidly shoot into the body of the host, creating an opening through which the sporoplasm can enter. When it develops this polar filament it is able to attach to its host. Upon contact with fish hosts and firing of the polar capsules, the sporoplasm contained within the central style of the triactinomyxon migrates into the epithelium or gut lining. Firstly, this sporoplasm undergoes mitosis to produce more amoeboid cells, which migrate into deeper tissue layers, to reach the cerebral cartilage. Myxospores, which develop from sporogonic cell stages inside fish hosts, are lenticular. They have a diameter of about 10 micrometers and are made of six cells. Two of these cells form polar capsules, two merge to form a binucleate sporoplasm, and two form protective valves. Myxospores are infective to oligochaetes, and are found among the remains of digested fish cartilage. They are often difficult to distinguish from related species because of morphological similarities across genera. Though M. cerebralis is the only myxosporean ever found in salmonid cartilage, other visually similar species may be present in the skin, nervous system, or muscle. Life cycle Myxobolus cerebralis has a two-host life cycle involving a salmonid fish and a tubificid oligochaete. So far, the only worm known to be susceptible to M. cerebralis infection is Tubifex tubifex, though what scientists currently call T. tubifex may in fact be more than one species. First, myxospores are ingested by tubificid worms. In the gut lumen of the worm, the spores extrude their polar capsules and attach to the gut epithelium by polar filaments. The shell valves then open along the suture line and the binucleate germ cell penetrates between the intestinal epithelial cells of the worm. This cell multiplies, producing many amoeboid cells by an asexual cell fission process called merogony. As a result of the multiplication process, the intercellular space of the epithelial cells in more than 10 neighbouring worm segments may become infected. Around 60–90 days postinfection, sexual cell stages of the parasite undergo sporogenesis, and develop into pansporocysts, each of which contains eight triactinomyxon-stage spores. 
These spores are released from the oligochaete anus into the water. Alternatively, a fish can become infected by eating an infected oligochaete. Infected tubificids can release triactinomyxons for at least a year. The triactinomyxon spores are carried by the water currents, where they can infect a salmonid through the skin. Penetration of the fish by these spores takes only a few seconds. Within five minutes, a sac of germ cells called a sporoplasm has entered the fish epidermis, and within a few hours, the sporoplasm splits into individual cells that will spread through the fish. Within the fish, both intracellular and extracellular stages reproduce in its cartilage by asexual endogeny, meaning new cells grow from within old cells. The final stage within the fish is the creation of the myxospore, which is formed by sporogony. They are released into the environment when the fish decomposes or is eaten. Some recent research indicates some fish may expel viable myxospores while still alive. Myxospores are extremely tough: "it was shown that Myxobolus cerebralis spores can tolerate freezing at −20°C for at least 3 months, aging in mud at 13°C for at least 5 months, and passage through the guts of northern pike Esox lucius or mallards Anas platyrhynchos without loss of infectivity" to worms.[citation needed] Triactinomyxons are much shorter-lived, surviving 34 days or less, depending on temperature.[citation needed] Pathology M. cerebralis infections have been reported from a wide range of salmonid species: eight species of "Atlantic" salmonids, Salmo; four species of "Pacific" salmonids, Oncorhynchus; four species of char, Salvelinus; the grayling, Thymallus thymallus; and the huchen, Hucho hucho. M. cerebralis causes damage to its fish hosts through attachment of triactinomyxon spores and the migrations of various stages through tissues and along nerves, as well as by digesting cartilage. The fish's tail may darken, but aside from lesions on cartilage, internal organs generally appear healthy. Other symptoms include skeletal deformities and "whirling" behavior (tail-chasing) in young fish, which was thought to have been caused by a loss of equilibrium, but is actually caused by damage to the spinal cord and lower brain stem. Experiments have shown that fish can kill Myxobolus in their skin (possibly using antibodies), but that the fish do not attack the parasites once they have migrated to the central nervous system. This response varies from species to species. In T. tubifex, the release of triactinomyxon spores from the intestinal wall damages the worm's mucosa; this may happen thousands of times in a single worm, and is believed to impair nutrient absorption. Spores are released from the worm almost exclusively when the temperature is between 10 °C and 15 °C, so fish in warmer or cooler waters are less likely to be infected, and infection rates vary seasonally. Fish size, age, concentration of triactinomyxon spores, and water temperature all affect infection rates in fish, as does the species of the fish in question. The disease has the most impact on fish less than five months old because their skeletons have not ossified. This makes young fish more susceptible to deformities and provides M. cerebralis more cartilage on which to feed. In one study of seven species of many strains, brook trout and rainbow trout (except one strain) were far more heavily affected by M. 
cerebralis after two hours of exposure than other species were, while bull trout, Chinook salmon, brown trout, and Arctic grayling were the least severely affected. While brown trout may harbor the parasite, they typically do not show any symptoms, and this species may have been M. cerebralis' original host. This lack of symptoms in brown trout meant that the parasite was only discovered after nonnative rainbow trout were introduced to Europe. The normally uniform trout cartilage becomes scarred with lesions in which M. cerebralis spores develop, weakening and deforming the connective tissues. Moderate or heavy clinical infection of fish with whirling disease can be presumptively diagnosed on the basis of changes in behavior and appearance about 35 to 80 days after initial infection, though "injury or deficiency in dietary tryptophan and ascorbic acid can evoke similar signs", so conclusive diagnosis may require finding myxospores in the fish's cartilage. In heavy infections, microscopic examination of the cartilage alone may be enough to find spores. In less severe infections, the most common test involves digestion of the cranial cartilage with the proteases pepsin and trypsin (pepsin-trypsin digest, PTD) before looking for spores. The head and other tissues can be further examined using histopathology to confirm whether the location and morphology of the spores match what is known for M. cerebralis. Serological identification of spores in tissue sections using an antibody raised against the spores is also possible. Parasite identity can also be confirmed using the polymerase chain reaction to amplify a 415 base pair region of the 18S rRNA gene of M. cerebralis. Fish should be screened at the life stage most susceptible to the parasite, with particular focus on fish in aquaculture units. Impact Although M. cerebralis was originally a mild pathogen of Salmo trutta in central Europe and of other salmonids in northeast Asia, the introduction of the rainbow trout (Oncorhynchus mykiss) has greatly increased its impact. Having no innate immunity to M. cerebralis, rainbow trout are particularly susceptible, and can release so many spores that even more resistant species in the same area, such as S. trutta, can become overloaded with parasites and incur 80–90% mortalities. Where M. cerebralis has become well established, it has caused the decline or even elimination of whole cohorts of fish. The impact of M. cerebralis in Europe is somewhat lessened because the species is endemic to the region, giving native fish stocks a degree of immunity. Rainbow trout, the species most susceptible to this parasite, are not native to Europe; successfully reproducing feral populations are rare, so few wild rainbow trout are young enough to be susceptible to infection. On the other hand, they are widely reared for restocking sport-fishing waters and for aquaculture, where this parasite has its greatest impact. Hatching and rearing methods designed to prevent infection of rainbow trout fry have proved successful in Europe. These techniques include hatching eggs in spore-free water and rearing fry to the "ossification" stage in tanks or raceways. These methods give particular attention to the quality of water sources to guard against spore introduction during water exchanges. Fry are moved to earthen ponds only once they are considered to be clinically resistant to the parasite, after skeletal ossification occurs. However, some Norwegian facilities have experienced outbreaks of M. cerebralis causing millions of dollars in losses. M.
cerebralis was first found in New Zealand in 1971. The parasite has only been found in rivers in the South Island, away from the most important aquaculture sites. Additionally, the salmonid species commercially aquacultured in New Zealand have low susceptibility to whirling disease, and the parasite has also not been shown to affect native species. An important indirect effect of the parasite's presence is the quarantine restrictions placed on exports of salmon products to Australia. M. cerebralis has been reported in nearly two dozen states in the United States, according to the Whirling Disease Initiative. It was first recorded in North America in 1956 in Pennsylvania, having been introduced via infected trout imported from Europe, and has spread steadily south and westwards. Until the 1990s, whirling disease was considered a manageable problem affecting rainbow trout in hatcheries. However, it has recently become established in natural waters of the Rocky Mountain states (Colorado, Wyoming, Utah, Montana, Idaho, New Mexico), where it is causing heavy mortalities in several sportfishing rivers. Some streams in the western United States have lost 90% of their trout. In addition, whirling disease threatens recreational fishing, which is important for the tourism industry, a key component of the economies of some western U.S. states. For example, "the Montana Whirling Disease Task Force estimated trout fishing generated US$300,000,000 in recreational expenditures in Montana alone". Making matters worse, some of the fish species that M. cerebralis infects (bull trout, cutthroat trout, and steelhead) are already threatened or endangered, and the parasite could worsen their already precarious situations. For reasons that are poorly understood, but probably have to do with environmental conditions, the impact on infected fish has been greatest in Colorado and Montana, and least in California, Michigan, and New York. Whirling disease was first confirmed in fish in Johnson Lake in Banff National Park in August 2016. CFIA labs confirmed the finding in August, and Parks Canada announced the outbreak on August 23, 2016. Although the disease was first discovered in Banff, that is not necessarily where it originated or from where it spread. The Government of Alberta is currently sampling and testing fish in six different watersheds (Peace River, Athabasca, North Saskatchewan, Red Deer, Bow and Oldman) to see where the disease has spread. Initial sample fish were collected in 2016 and are currently being processed by the Government of Alberta and CFIA labs. Since testing began, the parasite has been detected in the Upper Bow River, and in May 2017 it was confirmed that whirling disease had also been detected in the Oldman River Basin. The declaration does not mean that every susceptible finfish population within the Bow and Oldman River watersheds is infected with the disease.[citation needed] The parasite was first detected in the adjacent province of British Columbia in January 2024. As a result of the new declaration, a domestic movement permit will be required from the CFIA for moving susceptible species and end uses identified in the Domestic Movement Control Program, the vector Tubifex tubifex, the disease-causing agent Myxobolus cerebralis, and related items out of the infected and buffer areas of Alberta. Recreational and sport fishing, including fishing led by a professional guide, will not require a CFIA permit.
Prevention and control Some biologists have attempted to disarm triactinomyxon spores by making them fire prematurely. In the laboratory, only extreme acidity or basicity, moderate to high concentrations of salts, or electric current caused premature filament discharge; neurochemicals, cnidarian chemosensitizers, and trout mucus were ineffective, as were anesthetized or dead fish. If spores could be disarmed, they would be unable to infect fish, but further research is needed to find an effective treatment. Some strains of fish are more resistant than others, even within species; using resistant strains may help reduce the incidence and severity of whirling disease in aquaculture. There is also some circumstantial evidence that fish populations can develop resistance to the disease over time. Additionally, aquaculturists may avoid M. cerebralis infections by not using earthen ponds for raising young fish; this keeps them away from possibly infected tubificids and makes it easier to eliminate spores and oligochaetes through filtration, chlorination, and ultraviolet bombardment. To minimise tubificid populations, techniques include periodic disinfection of the hatchery or aquaculture ponds, and the rearing of small trout indoors in pathogen-free water. Smooth-faced concrete or plastic-lined raceways that are kept clean and free of contaminated water keep aquaculture facilities free of the disease. Lastly, some drugs, such as furazolidone, furoxone, benomyl, fumagillin, proguanil and clamoxyquine, have been shown to impede spore development, which reduces infection rates. For example, one study showed that feeding fumagillin to O. mykiss reduced the number of infected fish from 73–100% to 10–20%. However, this treatment is considered unsuitable for wild trout populations, and no drug treatment has ever been shown to be effective in the studies required for United States Food and Drug Administration approval. Recreational and sports fishers can help to prevent the spread of the parasite by not transporting fish from one body of water to another, not disposing of fish bones or entrails in any body of water, and ensuring boots and shoes are clean before moving between different bodies of water. Federal, state, provincial, and local regulations on the use of bait should be followed. See also Notes External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lua] | [TOKENS: 3582]
Contents Lua Lua (/ˈluːə/ LOO-ə; from Portuguese: lua [ˈlu(w)ɐ] meaning moon) is a lightweight, high-level, multi-paradigm programming language designed mainly for embedded use in applications. Lua is cross-platform software, since the interpreter of compiled bytecode is written in ANSI C, and Lua has a relatively simple C application programming interface (API) to embed it into applications. Lua originated in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages, but more complicated or domain-specific features were not included; rather, it included mechanisms for extending the language, allowing programmers to implement such features. As Lua was intended to be a general embeddable extension language, the designers of Lua focused on improving its speed, portability, extensibility and ease-of-use in development. History Lua was created in 1993 by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes, members of the Computer Graphics Technology Group (Tecgraf) at the Pontifical Catholic University of Rio de Janeiro, in Brazil. From 1977 until 1992, Brazil had a policy of strong trade barriers (called a market reserve) for computer hardware and software, believing that Brazil could and should produce its own hardware and software. In that climate, Tecgraf's clients could not afford, either politically or financially, to buy customized software from abroad; under the market reserve, clients would have to go through a complex bureaucratic process to prove their needs couldn't be met by Brazilian companies. Those reasons led Tecgraf to implement the basic tools it needed from scratch. Lua's predecessors were the data-description and configuration languages Simple Object Language (SOL) and Data-Entry Language (DEL). They had been independently developed at Tecgraf in 1992–1993 to add some flexibility into two different projects (both were interactive graphical programs for engineering applications at Petrobras company). There was a lack of any flow-control structures in SOL and DEL, and Petrobras felt a growing need to add full programming power to them. In The Evolution of Lua, the language's authors wrote: In 1993, the only real contender was Tcl, which had been explicitly designed to be embedded into applications. However, Tcl had unfamiliar syntax, did not offer good support for data description, and ran only on Unix platforms. We did not consider LISP or Scheme because of their unfriendly syntax. Python was still in its infancy. In the free, do-it-yourself atmosphere that then reigned in Tecgraf, it was quite natural that we should try to develop our own scripting language ... Because many potential users of the language were not professional programmers, the language should avoid cryptic syntax and semantics. The implementation of the new language should be highly portable, because Tecgraf's clients had a very diverse collection of computer platforms. Finally, since we expected that other Tecgraf products would also need to embed a scripting language, the new language should follow the example of SOL and be provided as a library with a C API. Lua 1.0 was designed in such a way that its object constructors, being then slightly different from the current light and flexible style, incorporated the data-description syntax of SOL (hence the name Lua: Sol meaning "Sun" in Portuguese, and Lua meaning "Moon"). 
Lua syntax for control structures was mostly borrowed from Modula (if, while, repeat/until), but it also took influence from CLU (multiple assignments and multiple returns from function calls, as a simpler alternative to reference parameters or explicit pointers), C++ ("neat idea of allowing a local variable to be declared only where we need it"), SNOBOL and AWK (associative arrays). In an article published in Dr. Dobb's Journal, Lua's creators also state that LISP and Scheme, with their single, ubiquitous data-structure mechanism (the list), were a major influence on their decision to develop the table as the primary data structure of Lua. Lua semantics have been increasingly influenced by Scheme over time, especially with the introduction of anonymous functions and full lexical scoping. Several features were added in new Lua versions. Versions of Lua prior to version 5.0 were released under a license similar to the BSD license. From version 5.0 onwards, Lua has been licensed under the MIT License. Both are permissive free software licences and are almost identical. Features Lua is commonly described as a "multi-paradigm" language, providing a small set of general features that can be extended to fit different problem types. Lua does not contain explicit support for inheritance, but allows it to be implemented with metatables. Similarly, Lua allows programmers to implement namespaces, classes and other related features using its single table implementation; first-class functions allow the employment of many techniques from functional programming, and full lexical scoping allows fine-grained information hiding to enforce the principle of least privilege. In general, Lua strives to provide simple, flexible meta-features that can be extended as needed, rather than supply a feature set specific to one programming paradigm. As a result, the base language is light; the full reference interpreter is only about 247 kB compiled and is easily adaptable to a broad range of applications. As a dynamically typed language intended for use as an extension or scripting language, Lua is compact enough to fit on a variety of host platforms. It supports only a small number of atomic data structures such as Boolean values, numbers (double-precision floating point and 64-bit integers by default) and strings. Typical data structures such as arrays, sets, lists and records can be represented using Lua's single native data structure, the table, which is essentially a heterogeneous associative array. Lua implements a small set of advanced features such as first-class functions, garbage collection, closures, proper tail calls, coercion (automatic conversion between string and number values at run time), coroutines (cooperative multitasking) and dynamic module loading. The classic "Hello, World!" program can be written with or without parentheses around the argument to print, as shown in the sketch below.[a] A variable may be declared without a value, or declared with a value such as 1000 (one thousand). A comment in Lua starts with a double hyphen and runs to the end of the line, similar to Ada, Eiffel, Haskell, SQL and VHDL. Multi-line strings and comments are marked with double square brackets. The factorial function can be implemented recursively, as the sketch below also shows. Lua has one type of conditional construct: if ... then ... end, with optional else and elseif clauses.
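A minimal sketch of the constructs just described (reconstructed for illustration, since the article's own listings are not shown here; the exact examples in the original may differ):

print("Hello, World!")   -- with parentheses
print "Hello, World!"    -- without parentheses (allowed for a single literal string argument)

local x          -- declaration of a variable, without a value (x is nil)
local y = 1000   -- declaration of a variable with a value of 1000

-- A comment runs from a double hyphen to the end of the line.
--[[ A multi-line comment
     is marked with double square brackets. ]]

-- A recursive factorial function.
local function factorial(n)
  if n == 0 then
    return 1
  else
    return n * factorial(n - 1)
  end
end

print(factorial(5))  --> 120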
The generic if ... then ... end statement requires all three keywords. The else keyword may be added with an accompanying statement block to control execution when the if condition evaluates to false, and execution may also be controlled according to multiple conditions using the elseif keyword; examples of all three forms appear in the sketch below. Lua has four types of loops: the while loop, the repeat loop (similar to a do ... while loop), the numeric for loop and the generic for loop. A generic for loop can iterate over the table _G using the standard iterator function pairs until it returns nil, as the sketch below shows. Loops can also be nested (put inside another loop). Lua's treatment of functions as first-class values is shown in the sketch below, where the print function's behavior is modified. Any future calls to print will now be routed through the new function, and because of Lua's lexical scoping, the old print function will only be accessible by the new, modified print. Lua also supports closures, as demonstrated in the sketch below. A new closure for the variable x is created every time addto is called, so that each new anonymous function returned will always access its own x parameter. The closure is managed by Lua's garbage collector, just like any other object. Tables are the most important data structures (and, by design, the only built-in composite data type) in Lua and are the foundation of all user-created types. They are associative arrays with the addition of automatic numeric keys and special syntax. A table is a set of key and data pairs, where the data is referenced by key; in other words, it is a hashed heterogeneous associative array. Tables are created using the {} constructor syntax. Tables are always passed by reference (see Call by sharing). A key (index) can be any value except nil and NaN, including functions. A table is often used as a structure (or record) by using strings as keys. Because such use is very common, Lua features a special syntax for accessing such fields. By using a table to store related functions, it can act as a namespace. Table entries without explicit keys are automatically assigned numerical keys, enabling tables to be used as an array data type. The first automatic index is 1 rather than 0 as it is in many other programming languages (though an explicit index of 0 is allowed). A numeric key 1 is distinct from a string key "1". The length of a table t is defined to be any integer index n such that t[n] is not nil and t[n+1] is nil; moreover, if t[1] is nil, n can be zero. For a regular array, with non-nil values from 1 to a given n, its length is exactly that n, the index of its last value. If the array has "holes" (that is, nil values between other non-nil values), then #t can be any of the indices that directly precede a nil value (that is, it may consider any such nil value as the end of the array). A table can be an array of objects. Using a hash map to emulate an array is normally slower than using an actual array; however, Lua tables are optimized for use as arrays to help avoid this issue. Extensible semantics is a key feature of Lua, and the metatable mechanism allows powerful customization of tables. The sketch below also demonstrates an "infinite" table: for any n, fibs[n] will give the n-th Fibonacci number using dynamic programming and memoization. Although Lua does not have a built-in concept of classes, object-oriented programming can be emulated using functions and tables. An object is formed by putting methods and fields in a table.
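A reconstructed sketch of the examples referenced above and in the following discussion (the names addto, fibs, Vector and vec follow the surrounding prose, but the article's original listings are not reproduced here, so details may differ):

-- Conditionals: if / elseif / else.
local n = 0
if n > 0 then
  print("positive")
elseif n < 0 then
  print("negative")
else
  print("zero")
end

-- Generic for loop over the global table _G using pairs.
for key, value in pairs(_G) do
  print(key, value)
end

-- Functions are first-class values: wrap the standard print.
local oldprint = print
print = function(s)
  oldprint("[log] " .. tostring(s))  -- future calls to print are routed through here
end

-- Closures: each call to addto returns a function capturing its own x.
local function addto(x)
  return function(y) return x + y end
end
local fourplus = addto(4)
print(fourplus(3))  --> [log] 7

-- An "infinite" table of Fibonacci numbers via a metatable (memoized).
local fibs = { 1, 1 }
setmetatable(fibs, {
  __index = function(t, i)
    t[i] = t[i - 1] + t[i - 2]  -- computed on demand, then cached
    return t[i]
  end
})
print(fibs[10])  --> [log] 55

-- A minimal object: methods and fields stored in a table, shared via a metatable.
local Vector = {}
Vector.__index = Vector

function Vector.new(x, y, z)
  return setmetatable({ x = x, y = y, z = z }, Vector)
end

function Vector:magnitude()  -- colon syntax adds an implicit self parameter
  return math.sqrt(self.x^2 + self.y^2 + self.z^2)
end

local vec = Vector.new(0, 3, 4)
print(vec:magnitude())  --> [log] 5.0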
Inheritance (both single and multiple) can be implemented with metatables, delegating nonexistent methods and fields to a parent object. There is no such concept as "class" with these techniques; rather, prototypes are used, similar to Self or JavaScript. New objects are created either with a factory method (that constructs new objects from scratch) or by cloning an existing object. In the basic vector object sketched above, setmetatable tells Lua to look for an element in the Vector table if it is not present in the vec table. vec.magnitude, which is equivalent to vec["magnitude"], first looks in the vec table for the magnitude element. The vec table does not have a magnitude element, but its metatable delegates to the Vector table for the magnitude element when it is not found in the vec table. Lua provides some syntactic sugar to facilitate object orientation. To declare member functions inside a prototype table, one can use function table:func(args), which is equivalent to function table.func(self, args). Calling class methods also makes use of the colon: object:func(args) is equivalent to object.func(object, args). With that in mind, a corresponding class can be written entirely with the : syntactic sugar, declaring the method as function Vector:magnitude() and calling it as vec:magnitude(). It is possible to use metatables to mimic the behavior of class inheritance in Lua; for example, a derived class could allow vectors to have their values multiplied by a constant. It is also possible to implement multiple inheritance; __index can either be a function or a table. Operator overloading can also be done; Lua metatables can have elements such as __add, __sub and so on. Implementation Lua programs are not interpreted directly from the textual Lua file, but are compiled into bytecode, which is then run on the Lua virtual machine (VM). The compiling process is typically invisible to the user and is performed during run-time, especially when a just-in-time compilation (JIT) compiler is used, but it can be done offline to increase loading performance or reduce the memory footprint of the host environment by leaving out the compiler. Lua bytecode can also be produced and executed from within Lua, using the dump function from the string library and the load/loadstring/loadfile functions. Lua version 5.3.4 is implemented in approximately 24,000 lines of C code. Like most CPUs, and unlike most virtual machines (which are stack-based), the Lua VM is register-based, and therefore more closely resembles actual hardware designs. The register architecture both avoids excessive copying of values and reduces the total number of instructions per function. The virtual machine of Lua 5 is one of the first register-based pure VMs to have wide use. Parrot and Android's Dalvik are two other well-known register-based VMs. PCScheme's VM was also register-based. A bytecode listing of the factorial function defined above can be produced with the luac 5.1 compiler. C API Lua is intended to be embedded into other applications, and provides a C API for this purpose. The API is divided into two parts: the Lua core and the Lua auxiliary library. The Lua API's design eliminates the need for manual reference counting (management) in C code, unlike Python's API. The API, like the language, is minimalist. Advanced functions are provided by the auxiliary library, which consists largely of preprocessor macros which assist with complex table operations. The Lua C API is stack based. Lua provides functions to push and pop most simple C data types (integers, floats, etc.)
to and from the stack, and functions to manipulate tables through the stack. The Lua stack is somewhat different from a traditional stack; the stack can be indexed directly, for example. Negative indices indicate offsets from the top of the stack (for example, −1 is the top, the most recently pushed value), while positive indices indicate offsets from the bottom (the oldest value). Marshalling data between C and Lua functions is also done using the stack. To call a Lua function, arguments are pushed onto the stack, and then lua_call is used to call the actual function. When writing a C function to be directly called from Lua, the arguments are read from the stack. The C API also provides some special tables, located at various "pseudo-indices" in the Lua stack. At LUA_GLOBALSINDEX prior to Lua 5.2 is the globals table, _G from within Lua, which is the main namespace. There is also a registry located at LUA_REGISTRYINDEX where C programs can store Lua values for later retrieval. Besides standard library (core) modules, it is possible to write extensions using the Lua API. Extension modules are shared objects which can be used to extend the functions of the interpreter by providing native facilities to Lua scripts. Lua scripts may load extension modules using require, just like modules written in Lua itself, or with package.loadlib. When a C library is loaded via require('foo'), Lua will look for the function luaopen_foo and call it; it acts like any other C function callable from Lua and generally returns a table filled with methods. A growing set of modules termed rocks are available through a package management system named LuaRocks, in the spirit of CPAN, RubyGems and Python eggs. Prewritten Lua bindings exist for most popular programming languages, including other scripting languages. For C++, there are a number of template-based approaches and some automatic binding generators. Applications In video game development, Lua is widely used as a scripting language, mainly due to its perceived ease of embedding, fast execution, and short learning curve. Notable games which use Lua include Roblox, Garry's Mod, World of Warcraft, Payday 2, Project Zomboid, Phantasy Star Online 2, Dota 2, Crysis, and many others. Some games that do not natively support Lua programming or scripting have this function added by mods, as ComputerCraft does for Minecraft. Similarly, Lua API libraries, like Discordia, are used for platforms that do not natively support Lua. Lua is used in the open-source 2-dimensional game engine LÖVE. Also, Lua is used in non-video game software, such as Adobe Lightroom, Moho, iClone, Aerospike, and some system software in FreeBSD and NetBSD, and is used as a template scripting language on MediaWiki using the Scribunto extension. In 2003, a poll conducted by GameDev.net showed that Lua was the most popular scripting language for game programming. On 12 January 2012, Lua was announced as a winner of the Front Line Award 2011 from the magazine Game Developer in the category Programming Tools. Many non-game applications also use Lua for extensibility, such as LuaTeX, an implementation of the TeX type-setting language; Redis, a key-value database; ScyllaDB, a wide-column store; Neovim, a text editor; Nginx, a web server; Wireshark, a network packet analyzer; Discordia, a Discord API library; and Pure Data, a visual audio programming language (through the pdlua extension).
Derived languages Several languages have been derived from Lua. In addition, the Lua user community provides some power patches on top of the reference C implementation. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Iraqi_dinar] | [TOKENS: 3254]
Contents Iraqi dinar The Iraqi dinar (code: IQD; Arabic: الدينار العراقي), Arabic pronunciation: [diːˈnɑːr]) is the currency of Iraq. The Iraqi dinar is issued by the Central Bank of Iraq (CBI). On 12 December 2025, the exchange rate with the US dollar was US$1 = 1418 dinars. History The Iraqi dinar entered circulation on 1 April 1932, replacing the Indian rupee, which had been the official currency since the British occupation of the country in World War I, at a rate of 1 dinar = 11 rupees. The dinar was pegged at par with sterling until 1959 when, without changing its value, the peg was switched to the United States dollar at the rate of IQD 1 = US$2.80. By not following the US devaluations in 1971 and 1973, the official rate rose to US$3.3778, before a 5% devaluation reduced its rate to US$3.2169, a rate which remained until the Gulf War in 1990, although in late 1989 the black market rate was reported at five to six times higher than the official rate. After the Gulf War in 1990, due to UN sanctions, Iraq was no longer able to place orders with the British security printing firm De La Rue for further issues of the previously high quality notes, so new notes were produced. The pre-1990 notes became known as Swiss dinars while the new dinar notes were called Iraqi dinars. Due to United States and the international sanctions on Iraq along with excessive government printing, the Iraqi dinar currency devalued quickly. By late 1995, US$1 was valued at 3,000 Iraqi dinars on the black market. During the Iraqi no-fly zones conflict, Swiss dinars notes continued to circulate in the then politically isolated Kurdish-populated northern Iraq. The northern Iraqi Kurdistan government that was created as a result, refused to accept the inflated Iraqi dinar notes (which were issued in huge amounts). Since the supply of Iraqi dinar notes increased while the supply of Swiss dinar notes remained stable (even decreased because of notes taken out of circulation), the Swiss dinar notes appreciated against the Iraqi dinar note. By having its own stable supply of the Swiss Iraqi dinars, the politically isolated region effectively evaded inflation, which ran rampant throughout the rest of the country. After Saddam Hussein was deposed in the 2003 invasion of Iraq, the Iraqi Governing Council and the Office for Reconstruction and Humanitarian Assistance printed more Iraqi dinar notes as a stopgap measure to maintain the money supply until a new currency could be introduced. Between 15 October 2003 and 15 January 2004, the Coalition Provisional Authority issued new Iraqi dinar notes and coins, with the notes printed by De La Rue again, using modern anti-forgery techniques to "create a single unified currency that is used throughout all of Iraq and will also make money more convenient to use in people's everyday lives". Multiple trillions of dinars were shipped to Iraq and secured in the Central Bank[clarification needed] to exchange for Iraqi dinar notes. Saddam dinar notes were exchanged for the new dinars at par, while Swiss dinar notes were exchanged at a rate of one Swiss dinar = 150 new dinars. Inflation and depreciation of the currency has continued since. On 19 December 2020, Iraq's Central Bank devalued the dinar by 24% to improve the government's revenue, which was affected by the COVID-19 pandemic and low oil prices. On 2 March 2019, the Central Bank's indicative exchange rate was IQD 1,190 = US$1. and on 18 June 2021 it was IQD 1,460.5000 = US$1. 
There is considerable confusion (perhaps intentional on the part of dinar sellers) around the role of the International Monetary Fund in Iraq. The IMF as part of the rebuilding of Iraq is monitoring Iraq's finances and for this purpose uses a single rate (not a sell/buy) of IQD 1170 per US$. This "program rate" is used for calculations in the IMF monitoring program and is not a rate imposed on Iraq by the IMF. For a wider history surrounding currency in the region, see British currency in the Middle East. There is little international demand for dinars, since Iraq has few exports other than oil, which is sold in US dollars. Thus there is often an extremely high exchange rate for dinars compared with other currencies. However, the downfall of Saddam Hussein resulted in the development of a multi-million-dollar industry involving the sale of dinars to speculators. Such exchange services and companies sell dinars at an inflated price, pushing the idea that the dinar would sharply increase in value to a profitable exchange rate some time in the future, instead of being redenominated. This activity can be either a legitimate service to currency speculators, or foreign exchange fraud: at least one major such currency exchange provider was convicted of fraud involving the dinar. This trade revived after the election of Donald Trump in November 2016, with many buyers believing that Trump would cause a sharp revaluation in the dinar (often referred to by the abbreviation "RV" by supporters of the dinar trade,) to an exchange rate comparable to the US dollar. In 2014, Keith Woodwell (director of the Utah Division of Securities) and Mike Rothschild (writer for Skeptoid blog) stated that the speculation over the Iraqi dinar originated from a misunderstanding of why the value of the Kuwaiti dinar recovered after the First Gulf War, leading to an assumption that the Iraqi dinar would follow suit after the fall of Saddam: Woodwell and Rothschild noted substantial differences in economic and political stability between Iraq and Kuwait, with Iraq facing pervasive sectarian violence amid near-total reliance on oil exports. In response to the growing concerns about fraud and scams related to investment in the Iraqi dinar, State agencies such as Washington State, Utah, Oklahoma, Alabama and others issued statements and releases warning potential investors. Further alerts were issued by news agencies. These alerts usually warn potential investors that there is no place outside Iraq to exchange the dinar, that they are typically sold by dealers at inflated prices, and that there is little evidence to substantiate the claims of significant appreciation of their investment due to revaluation of the currency. In February 2014, the Better Business Bureau included investing in the dinar as one of the ten most notable scams in 2013. Coins Coins were introduced in 1931 and 1932 in denominations of round 1 and 2 fils in bronze, and scalloped 4 and 10 fils in nickel. 20, 50, and 200 fils were 50% silver. The 200 fils coin is also known as a rial. Bronze substituted nickel in the 5 and 10 fils from 1938 to 1943 during the World War II period and reverted to nickel in 1953. Silver 100 fils coins were also introduced in 1953. These coins first depicted King Faisal I from 1931 to 1933, King Ghazi from 1938, and King Faisal II from 1943 until the end of the kingdom. 
Following the establishment of the Iraqi Republic, a new series of coins was introduced in denominations of 1, 5, 10, 25, 50, and 100 fils, with the 25, 50, and 100 fils in silver until 1969. In this series an allegorical sun replaced the image of the king, shapes and sizes remained the same with the exception of the 1 fil which was decagon shaped. This image was then replaced by three palms in 1968. In 1970, 250 fils pieces were introduced, followed by 500 fils and IQD 1 coins in 1982. A number of the coins for 1982 were a commemorative series celebrating Babylonian achievements. During this period, many of the coins were identified by their shape due to being made of similar composition metals, as from 1980 onward 250 fils were octagonal, 500 fils square, and IQD 1 decagon shaped. Coin production ceased after 1990 due to the emergency conditions generated by the Gulf War and international sanctions. In 2004, a new series of coins were issued in denominations of IQD 25, IQD 50 and IQD 100 and were struck in bronze, brass, and nickel-plated steel respectively. They are sparse in design and depict an abstract map of Iraq and the main rivers. Banknotes On 16 March 1932, banknotes were issued by the government in denominations of 1⁄4, 1⁄2, 1, 5, 10 and 100 dinars. The notes were printed in the United Kingdom by Bradbury, Wilkinson & Co. From 1932 to 1947, the banknotes were issued by the Iraqi currency board for the government of Iraq and banknotes were convertible into pound sterling. From 1947, the banknotes were issued by the National Bank of Iraq, then after 1954 by the Central Bank of Iraq. 100 dinars notes ceased production in the 1940s, however, the same denominations were used until 1978, when IQD 25 notes were introduced. In 1991, IQD 50 were introduced and IQD 100 reintroduced, followed in 1995 by IQD 250 notes and IQD 10,000 notes in 2002. Banknotes that were issued between 1990 and October 2003, along with an IQD 25 note issued in 1986, bear an idealized engraving of former Iraqi President Saddam Hussein. Following the 1991 Gulf War, Iraq's currency was printed both locally and in China, using poor grade wood pulp paper (rather than cotton or linen) and inferior quality lithography (some notes were reputedly printed on presses designed for printing newspapers). The primitive printing techniques resulted in a limitless variety in coloration and detail, one layer of the printing would be too faint while another would be too dark. Counterfeit banknotes often appeared to be of better quality than real notes[citation needed]. Some notes were very poorly cut, and some notes even lacked serial numbers. Despite the collapse in the value of the Iraqi dinar, the highest denomination printed until 2002 was IQD 250. In 2002, the Central Bank of Iraq issued an IQD 10,000 banknote to be used for "larger, and inter-bank transactions". This note was rarely accepted in practice due to fears of looting and counterfeiting. This forced people to carry around stacks of IQD 250 note for everyday use. The other, smaller notes were so worthless that they largely fell into disuse. This situation meant that Iraq, for the most part, had only one denomination of currency in wide circulation. Currency printed before the Gulf War was often called the Swiss dinar, a term of obscure and uncertain origins. These notes were manufactured in England by De La Rue and were of significantly higher quality than those later produced under the economic sanctions that were imposed after the first Gulf War. 
After a change-over period, this currency was unendorsed by the Iraqi government. However, this old currency still circulated in Kurdish-populated parts of northern Iraq until it was replaced with the new dinar after the second Gulf War. During this time the Swiss dinar retained its value, whilst the new currency consistently lost value at sometimes 30% per annum. In 2003, new banknotes were issued consisting of six denominations: IQD 50, IQD 250, IQD 1,000, IQD 5,000, IQD 10,000, and IQD 25,000. The notes were similar in design to notes issued by the Central Bank of Iraq (CBI) in the 1970s and 1980s. An IQD 500 note was issued a year later, in October 2004. In the Kurdistan Region, the IQD 50 note is not in circulation. In March 2014, the CBI began replacing banknotes with anti-counterfeiting enhanced versions that include SPARK optical security features, scanner readable guarantee threads in addition to braille embossing to assist vision-impaired persons. In February 2015, the CBI announced the removal from circulation on 30 April 2015 of the IQD 50 notes. Persons holding these banknotes were advised to immediately redeem them at their nearest bank for the IQD 250 and higher denomination dinar notes at a one-to-one rate at no charge. In November 2015, the CBI announced the introduction of a new IQD 50,000 banknote. This is the first new denomination banknote since the new series was first issued in 2003, and also the largest ever printed by the CBI. The banknotes are printed using new security features from Giesecke & Devrient & De La Rue and measure 156 × 65 mm. They feature an outline map of Iraq showing the Euphrates and Tigris rivers as well as the Great Mosque of Samarra and the head of a purebred Arabian horse as a watermark. In 2018, the Central Bank of Iraq (CBI) released new designs for the 25,000, 10,000, 1,000, 500, and 250 dinar notes. However, the most notable change was with the 1,000-dinar note, which was redesigned after social media users noticed that Surah Al-Ikhlas was written in the center of its front side. In the new design, the Surah was replaced by the Assyrian flag. Surah Al-Ikhlas is also written on the right side of the front of the 25,000-dinar note, and it remains in the new design. Investing in the Iraqi dinar (IQD) represents a high-risk financial strategy that attracts speculative investors hoping for potential future currency appreciation. The investment involves purchasing Iraqi dinars with U.S. dollars, based on speculation about Iraq's potential economic recovery. The Iraqi dinar's value is strictly controlled by the Iraqi government and does not freely float on global forex markets. This means that even if Iraq's economic conditions improve, the currency may not automatically increase in value. Investors face numerous significant challenges, including extremely limited trading volume, high transaction fees that can reach up to 20%, and widespread scams within the currency exchange market. Financial experts consistently warn about the risks associated with dinar investments. The currency's value remains heavily dependent on Iraq's complex political landscape, ongoing economic instability, and fluctuating oil prices. Despite Iraq's substantial oil reserves, which could potentially support future economic growth, purchasing dinars is not considered a reliable investment strategy. 
Alternative investment methods, such as investing directly in Iraqi stocks or companies, are generally recommended as more transparent and potentially more profitable approaches to engaging with Iraq's economic potential. The currency's fixed exchange rate and restricted trading options make it a particularly unattractive investment for international investors. As of September 2024, one U.S. dollar remains worth approximately 1,310.6 Iraqi dinars, underscoring the currency's current limited international value and speculative nature. Exchange rate See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Non-player_character#cite_note-Gamasutra-4] | [TOKENS: 1785]
Contents Non-player character A non-player character (NPC) is a character in a game that is not controlled by a player. The term originated in traditional tabletop role-playing games where it applies to characters controlled by the gamemaster, or referee, rather than by another player. In video games, this usually means a computer-controlled character that has a predetermined set of behaviors that potentially will impact gameplay, but will not necessarily be the product of true artificial intelligence. Role-playing games In traditional tabletop role-playing games (RPG) such as Dungeons & Dragons, an NPC is a character portrayed by the gamemaster (GM). While the player characters (PCs) form the narrative's protagonists, non-player characters can be thought of as the "supporting cast" or "extras" of a roleplaying narrative. Non-player characters populate the fictional world of the game, and can fill any role not occupied by a player character. Non-player characters might be allies, bystanders, or competitors to the PCs. NPCs can also be traders who trade currency for things such as equipment or gear. NPCs thus vary in their level of detail. Some may be only a brief description ("You see a man in a corner of the tavern"), while others may have complete game statistics and backstories. There is some debate about how much work a gamemaster should put into an important NPC's statistics; some players prefer to have every NPC completely defined with stats, skills, and gear, while others define only what is immediately necessary and fill in the rest as the game proceeds. There is also some debate regarding the importance of fully defined NPCs in any given role-playing game, but there is consensus that the more "real" the NPCs feel, the more fun players will have interacting with them in character. In some games and in some circumstances, a player who is without a player character can temporarily take control of an NPC. Reasons for this vary, but often arise from the player not maintaining a PC within the group and playing the NPC for a session or from the player's PC being unable to act for some time (for example, because the PC is injured or in another location). Although these characters are still designed and normally controlled by the gamemaster, when players are allowed to temporarily control these non-player characters, it gives them another perspective on the plot of the game. Some systems, such as Nobilis, encourage this in their rules.[citation needed] Many game systems have rules for characters sustaining positive allies in the form of NPC followers, hired hands, or other dependents stature to the PC (player character). Characters may sometimes help in the design, recruitment, or development of NPCs. In the Champions game (and related games using the Hero System), a character may have a DNPC, or "dependent non-player character". This is a character controlled by the GM, but for which the player character is responsible in some way, and who may be put in harm's way by the PC's choices. Video games The term "non-player character" is also used in video games to describe entities not under the direct control of a player. The term carries a connotation that the character is not hostile towards players; hostile characters are referred to as enemies, mobs, or creeps. NPC behavior in computer games is usually scripted and automatic, triggered by certain actions or dialogue with the player characters. 
In certain multiplayer games (Neverwinter Nights and Vampire: The Masquerade series, for example) a player that acts as the GM can "possess" both player and non-player characters, controlling their actions to further the storyline. More complex games, such as the aforementioned Neverwinter Nights, allow the player to customize the NPCs' behavior by modifying their default scripts or creating entirely new ones. In some online games, such as massively multiplayer online role-playing games, NPCs may be entirely unscripted, and are essentially regular character avatars controlled by employees of the game company. These "non-players" are often distinguished from player characters by avatar appearance or other visual designation, and often serve as in-game support for new players. In other cases, these "live" NPCs are virtual actors, playing regular characters that drive a continuing storyline (as in Myst Online: Uru Live). In earlier RPGs, NPCs only had monologues. This is typically represented by a dialogue box, floating text, cutscene, or other means of displaying the NPCs' speech or reaction to the player. [citation needed] NPC speeches of this kind are often designed to give an instant impression of the character of the speaker, providing character vignettes, but they may also advance the story or illuminate the world around the PC. Similar to this is the most common form of storytelling, non-branching dialogue, in which the means of displaying NPC speech are the same as above, but the player character or avatar responds to or initiates speech with NPCs. In addition to the purposes listed above, this enables the development of the player character. More advanced RPGs feature interactive dialogue, or branching dialogue (dialogue trees). An example are the games produced by Black Isle Studios and White Wolf, Inc.; every one of their games is multiple-choice roleplaying. When talking to an NPC, the player is presented with a list of dialogue options and may choose between them. Each choice may result in a different response from the NPC. These choices may affect the course of the game, as well as the conversation. At the least, they provide a reference point to the player of their character's relationship with the game world. Ultima is an example of a game series that has advanced from non-branching (Ultima III: Exodus and earlier) to branching dialogue (from Ultima IV: Quest of the Avatar and on). Other role-playing games with branching dialogues include Cosmic Soldier, Megami Tensei, Fire Emblem, Metal Max, Langrisser, SaGa, Ogre Battle, Chrono, Star Ocean, Sakura Wars, Mass Effect, Dragon Age, Radiant Historia, and several Dragon Quest and Final Fantasy games. Certain video game genres revolve almost entirely around interactions with non-player characters, including visual novels such as Ace Attorney and dating sims such as Tokimeki Memorial, usually featuring complex branching dialogues and often presenting the player's possible responses word-for-word as the player character would say them. Games revolving around relationship-building, including visual novels, dating sims such as Tokimeki Memorial, and some role-playing games such as Persona, often give choices that have a different number of associated "mood points" that influence a player character's relationship and future conversations with a non-player character. 
These games often feature a day-night cycle with a time scheduling system that provides context and relevance to character interactions, allowing players to choose when and if to interact with certain characters, which in turn influences their responses during later conversations. In 2023, Replica Studios unveiled its AI-developed NPCs for the Unreal Engine 5, in cooperation with OpenAI, which enable players to have an interactive conversation with unplayable characters. "NPC streaming"—livestreaming while mimicking the behaviors of an NPC—became popular on TikTok in 2023 and was largely popularized by livestreamer Pinkydoll. Other usage From around 2018, the term NPC became an insult, primarily online, to suggest that a person is unable to form thoughts or opinions of their own. This is sometimes illustrated with a grey-faced, expressionless version of the Wojak meme. Monetization NPC streaming is a type of livestream that allows users to participate in and shape the content they are viewing in real time. It has become widely popular as influencers and users of social media platforms such as TikTok utilize livestreams to act as non-player characters. "Viewers in NPC live streams take on the role of puppeteers, influencing the creator's next move." This phenomenon has been on the rise as viewers are actively involved in what they are watching, by purchasing digital "gifts" and sending them directly to the streamer. In return, the streamer will briefly mimic a character or act. This phenomenon has become a trend starting from July 2023, as influencers make profits from this new internet character. Pinkydoll, a TikTok influencer, gained 400,000 followers the same month that she started NPC streaming, while her livestreams began to earn her as much as $7,000 in a day. NPC streaming gives creators a new avenue to earn money online. Despite this, certain creators are quitting due to certain stigmas that come with the strategy. For example, a pioneer of the NPC trend, Malik Ambersley has been robbed, accosted by police, and gotten into fights due to the controversial nature of his act. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Albania] | [TOKENS: 2565]
Contents History of the Jews in Albania The history of the Jews in Albania dates back about 2,000 years. According to historian Apostol Kotani (Albania and the Jews): "Jews may have first arrived in Albania as early as 70 C.E. as captives on Roman ships that washed up on the country's southern shores... descendants of these captives that would build the first synagogue in the southern port city of Sarandë in the fifth century...[but] Little is known about the Jewish community in the area until the 15th century." In the early 16th century, there were Jewish settlements in most of the major cities of Albania, such as Berat, Elbasan, Vlorë and Durrës, and they are also reported in the Kosovo region. These Jewish families were mainly of Sephardi origin and descendants of the Spanish and Portuguese Jews expelled from Iberia at the end of the 15th century. Present-day Albanian Jews, predominantly of Romaniote[verification needed] and Sephardi origin, have in modern times only constituted a very small percentage of the population. During the Italian occupation which coincided with World War II, Albania was the only country in Nazi-occupied Europe which saw an increase in its Jewish population, because Albanian Jews were not turned over to the Germans thanks to an Albanian set of laws known as the Besa, from the Kanun. During the communist dictatorship of Enver Hoxha, which was named the People's Socialist Republic of Albania, all religions were banned in the country from February 1967 to 1990, including Judaism, in adherence to the doctrine of state atheism, and all foreign influences were also restricted. In the post-communist era, these policies have been abandoned and freedom of religion is permitted, although the number of practicing Jews in Albania is very small today, with many Jews having made aliyah to Israel. Ancient period First reports of Jews living in Albania date from 70 CE, with it being thought that Jews first arrived on ships on the shores of southern Albania. The descendants of these refugees, according to Albanian historian Apostol Kotani, would found Albania's first synagogue in the fourth or fifth century, in the city of Onchesmos, modern-day Saranda. At this city on the Adriatic coast, opposite the Greek island of Corfu, the remains of the synagogue dating from the 3rd or 4th century (and enlarged in the 5th or 6th century) were excavated in 2003-2004. Albanian archaeologists apparently first discovered remains in the 1980s; the ban on religion by the then Communist regime prevented them from further exploring what was already thought to be a religious site. The subject matter of the exceptional mosaics found at the site suggested a Jewish past, leading to a joint project between archaeologists from the Institute of Archaeology in Tirana and the Hebrew University Institute of Archaeology in Jerusalem. According to archaeologists Gideon Foerster and Ehud Netzer of the Hebrew University of Jerusalem, “five stages were identified in the history of the site. In the two early stages fine mosaic pavements (2nd to 4th century), probably part of a private home, preceded the later synagogue and church. In the third stage several rooms were added, the largest of these containing a mosaic pavement representing in its centre a menorah flanked by a shofar (ram’s horn) and an etrog (citron), all symbols associated with Jewish festivals. Mosaic pavements also decorated the other rooms.
A large basilical hall added in the last two stages of the history of the site (5th to 6th century) represents the heyday of the Jewish community of Anchiasmon (Onchesmos), the ancient name of Saranda.” In sixth-century Venosa (in Italy) there are references to Jews hailing from Onchesmos, which was also called Anchiasmos. The synagogue in Onchesmos was supplanted by a Christian church in the sixth century. Medieval and Ottoman period In the 12th century, Benjamin of Tudela visited the area and recorded the presence of Jews. In the thirteenth century, a Jewish community existed in Durrës and was employed in the salt trade. In Vlorë during 1426, the Ottomans supported the settlement of a Jewish community involved in mercantile activities. The Vlorë community underwent population growth in subsequent decades with Jews migrating from Corfu, Venetian ruled lands, Naples, France and the Iberian Peninsula. Approximately seventy Jewish families from Valencia settled in Vlorë between 1391 and 1492. Following the expulsion of Jews from Spain in 1492, the Ottoman state resettled additional Jewish exiles in Vlorë toward the end of the fifteenth century. The Spanish expulsions also resulted in the formation of a Jewish community in Ottoman Berat that consisted of 25 families in 1519–1520. In 1501, following the establishment of Ottoman rule, the Durrës Jewish community experienced population growth, as did the Jewish population in Elbasan and Lezhë. Ottoman censuses for 1506 and 1520 recorded the Jewish population as consisting of 528 families and some 2,600 people in Vlorë. The Jews of Vlorë were involved in trade and the city imported items from Europe and exported spices, leather, cotton fabrics, velvets, brocades and mohair from the Ottoman cities of Istanbul and Bursa. Jews avoided settling in the city of Shkodra as, unlike elsewhere in Albania, they perceived it to be the site of "fanatical" religious tensions, and the local rulers would not give them the same rights that Jews were receiving elsewhere in the Empire. It is thought that the beginning of the Jewish community in Elbasan was in 1501, but when Evliya Çelebi visited the city he reported that there were no Jews, nor any "Franks, Armenians, Serbs or Bulgars"; Edmond Malaj argues that Celebi was simply unaware of the local Jewish neighborhood which existed in the city, which had its own Jewish market. After the Battle of Lepanto (1571) and the deterioration of security along the Ottoman-controlled Adriatic and Ionian coasts, the numbers of Jews within Vlorë decreased. The Berat and Vlorë Jewish communities took an active role in the welfare of other Jews, such as managing to attain the release of war-related captives present in Durrës in 1596. In 1673, Sabbatai Zevi was exiled by the sultan to the Albanian port of Ulqin, also called Ulcinj, now in Montenegro, dying there three years later. Followers of Sabbatai Zevi existed in Berat among Jews during the mid-seventeenth century. The Jewish community of Yanina renewed the Jewish communities of Kavajë, Himarë, Delvinë, Vlorë, Elbasan and Berat in the nineteenth century. 1900–1939 During the Albanian nationalist revolts of 1911, Ottoman officials accused the Jewish community of colluding with and protecting Albanian nationalist rebels. Vlorë was the site of Albania's only synagogue until it was destroyed in the First World War.
According to the Albanian census of 1930, there were only 204 Jews registered at that time in Albania. The official recognition of the Jewish community was granted on April 2, 1937, while at that time this community consisted in about 300 members. With the rise of Nazi Germany a number of German and Austrian Jews took refuge in Albania. Still in 1938 the Albanian Embassy in Berlin continued to issue visas to Jews, at a time when no other European country was willing to take them. One of the major Albanologists Norbert Jokl asked for Albanian citizenship, which was granted to him immediately; but this could not save him from concentration camps. World War II Albania had about 200 Jews at the beginning of the war. It subsequently became a safe haven for several hundred Jewish refugees from other countries. At the Wannsee Conference in 1942, Adolf Eichmann, planner of the mass murder of Jews across Europe, estimated the number of Jews in Albania that were to be killed at 200. Nevertheless, Jews in Albania remained protected by the local Christian and Muslim population and this protection continued even after the capitulation of Italy on September 8, 1943. At the end of the war, Albania had a population of 2,000 Jews. During the Second World War, Jews were concealed in the homes and basements of 60 families from the Muslim and Christian communities in Berat. Albania was the only European nation directly affected by the war to have come out of the Second World War with a higher Jewish population at the end of the war than at the start of the war. Communist era Throughout Albania's era of communist rule under the dictatorship of Enver Hoxha, the Jewish community was isolated from the Jewish world, but its isolation was not caused by specifically anti-Jewish measures. In order to forge a sustainable form of national unity as well as the new system of socialism, Hoxha banned confessional loyalties across the religious spectrum. In this manner, the fate of the Jewish community was inextricably linked to the fate of Albanian society as a whole. All religions were strictly banned in the country. After the fall of Communism in 1991, nearly all of the Jews of Albania emigrated. Because they were not victims of anti-Semitism, their primary reasons for leaving Albania were economic. Some 298 Albanian Jews emigrated to Israel (mostly settling in Ashdod and Karmiel) and about 30 others moved to the United States. About a dozen Jews, most of whom were married to non-Jews, chose to remain in Albania. Jews in present-day Albania Today, around 40 to 50 Jews are living in Albania, most of them are living in the capital, Tirana. An old synagogue was discovered in the city of Saranda and a new synagogue which is known as "Hechal Shlomo" started providing services to the Jewish community in Tirana in December 2010. A synagogue still exists in Vlorë, but it is no longer in use. Also in December 2010, Rabbi Joel Kaplan was inaugurated as the first chief rabbi of Albania by the former Albanian Prime Minister Sali Berisha and the Chief Rabbi of Israel Shlomo Amar. A Jewish Community Center which is named "Moshe Rabbenu" was also inaugurated in Tirana, in total disaccordance with the Jewish community who denies Kaplan's status of chief rabbi, partly because they were never consulted. Kaplan is an emissary or shaliach of the Chabad or Lubavitch sect of Hasidic Judaism. 
In the late 2010s, a Jewish history museum which is named the "Solomon Museum" was established in southern Berat and it contains exhibits about the Holocaust in Albania and the survival of Jews in the country during the war. In early July 2020, a Holocaust Memorial was unveiled in Tirana and it honors Albanians who safeguarded Jews from Nazi persecution during the Second World War. Among those who were in attendance were prime minister Edi Rama and the US and Israeli ambassadors. On 22 October 2020, the Albanian parliament adopted the International Holocaust Remembrance Alliance's Working Definition of Antisemitism to support international efforts in combating antisemitism, becoming the first Muslim-majority country to do so. Partnering with the Jewish Agency for Israel and the New York-based Combat Antisemitism Movement (CAM), Albania hosted the first meeting of The Balkans Forum Against Anti-Semitism on 28 October 2020, which was held online due to the COVID-19 pandemic. The participants included Prime minister Edi Rama, U.S. Secretary of State Michael R. Pompeo and other high ranking officials from the local region, wider Europe and the US. Prime Minister Edi Rama has been commended for his efforts to fight antisemitism. Notable Jews of Albanian origin Further reading See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Science_fiction_comedy] | [TOKENS: 457]
Contents Science fiction comedy Science fiction comedy (sci-fi comedy) or comic science fiction is a subgenre of science fiction or science fantasy that exploits the science fiction genre's conventions for comedic effect. The genre often mocks or satirizes standard science fiction conventions, concepts and tropes – such as alien invasion of Earth, interstellar travel, or futuristic technology. It can also satirize and criticize present-day society. An early example was the Pete Manx series by Henry Kuttner and Arthur K. Barnes (sometimes writing together and sometimes separately, under the house pen-name of Kelvin Kent). Published in Thrilling Wonder Stories in the late 1930s and early 1940s, the series featured a time-traveling carnival barker who uses his con-man abilities to get out of trouble. Two later series cemented Kuttner's reputation as one of the most popular early writers of comic science fiction: the Gallegher series (about a drunken inventor and his narcissistic robot) and the Hogben series (about a family of mutant hillbillies). The former appeared in Astounding Science Fiction in 1943 and 1948 and was collected in hardcover as Robots Have No Tails (Gnome, 1952), and the latter appeared in Thrilling Wonder Stories in the late 1940s. In the 1950s, authors contributing to the sub-genre included Alfred Bester, Harry Harrison, C. M. Kornbluth, Frederik Pohl, and Robert Sheckley. The Hitchhiker's Guide to the Galaxy is a science fiction comedy series written by Douglas Adams. Originally a radio comedy broadcast on BBC Radio 4 in 1978, it later morphed into other formats, including stage shows, novels, comic books, a 1981 TV series, a 1984 computer game, and a 2005 feature film. A prominent series in British popular culture, The Hitchhiker's Guide to the Galaxy has become an international multi-media phenomenon; the novels are the most widely distributed, having been translated into more than 30 languages by 2005. Terry Pratchett's 1981 novel Strata also exemplifies the science fiction comedy genre. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-71] | [TOKENS: 11899]
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 billion years to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary periods: Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. 
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogenous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface are on average 0.64 millisieverts of radiation per day, and significantly less than the radiation of 1.84 millisieverts per day or 22 millirads per day during the flight to and from Mars. For comparison the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with potentially levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars possibly a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
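As a rough, order-of-magnitude check of that last comparison, the exponential barometric model can be applied to Earth's atmosphere (a sketch only: the sea-level pressure and pressure scale height below are assumed standard values not given in the text, and pressure is used here as a stand-in for density):

import math

# Exponential barometric model: p(h) ~ p0 * exp(-h / H)
p0_earth = 101_325.0   # Pa, Earth sea-level pressure (assumed standard value)
H_earth = 8_000.0      # m, approximate Earth pressure scale height (assumption)

p_at_35_km = p0_earth * math.exp(-35_000.0 / H_earth)
print(f"Earth pressure at 35 km: ~{p_at_35_km:.0f} Pa")   # roughly 1.3 kPa

# Compare with the Martian surface figures quoted above:
# Hellas Planitia maximum ~1,155 Pa, planetary mean ~600 Pa -- the same order of magnitude.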
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization (involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars), or by Martian life. Compared to Earth, its higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons that alternate between its northern and southern hemispheres, much as on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also produce a covering of dry ice on the polar ice caps. Hydrology Mars contains significant amounts of water, but most of it is dust-covered water ice at the Martian polar ice caps. 
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. 
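The "five to seven times" deuterium enrichment quoted above follows directly from the two D/H ratios; a minimal check in Python using the values as given in the text:

# D/H isotope ratios as given in the text (dimensionless)
mars_dh, mars_dh_err = 9.3e-4, 1.7e-4
earth_dh = 1.56e-4

low = (mars_dh - mars_dh_err) / earth_dh    # ~4.9
high = (mars_dh + mars_dh_err) / earth_dh   # ~7.1
print(f"Martian D/H is {low:.1f} to {high:.1f} times the terrestrial value")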
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference and thus the delta-v needed to transfer between Mars and Earth is the second lowest for Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars has its closest approach to Earth (opposition) in a synodic period of 779.94 days. It should not be confused with Mars conjunction, where the Earth and Mars are at opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. 
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis). Commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first telescopic astronomical observations, which included Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. 
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth toward Mars was the Soviet Union's Mars 1, which flew past the planet in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993), and one, Phobos 2 (1989), that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved uncrewed spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. 
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. A number of further missions to Mars are planned. As of February 2024, debris left on Mars by past missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894, W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars. Likewise, the glass in impact craters on Mars could have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Job_(software)] | [TOKENS: 662]
Contents Job (computing) In computing, a job is a unit of work or unit of execution (that performs that work). A component of a job (as a unit of work) is called a task or a step (if sequential, as in a job stream). As a unit of execution, a job may be concretely identified with a single process, which may in turn have subprocesses (child processes; the process corresponding to the job being the parent process) which perform the tasks or steps that comprise the work of the job; or with a process group; or with an abstract reference to a process or process group, as in Unix job control. Jobs can be started interactively, such as from a command line, or scheduled for non-interactive execution by a job scheduler, and then controlled via automatic or manual job control. Jobs that have finite input can complete, successfully or unsuccessfully, or fail to complete and eventually be terminated. By contrast, online processing such as by servers has open-ended input (they service requests as long as they run), and thus never complete, only stopping when terminated (sometimes called "canceled"): a server's job is never done. History The term "job" has a traditional meaning as "piece of work", from Middle English "jobbe of work", and is used as such in manufacturing, in the phrase "job production", meaning "custom production", where it is contrasted with batch production (many items at once, one step at a time) and flow production (many items at once, all steps at the same time, by item). Note that these distinctions have become blurred in computing, where the oxymoronic term "batch job" is found, and used either for a one-off job or for a round of "batch processing" (same processing step applied to many items at once, originally punch cards). In this sense of "job", a programmable computer performs "jobs", as each one can be different from the last. The term "job" is also common in operations research, predating its use in computing, in such uses as job shop scheduling (see, for example Baker & Dzielinski (1960) and references thereof from throughout the 1950s, including several "System Research Department Reports" from IBM Research Center). This analogy is applied to computer systems, where the system resources are analogous to machines in a job shop, and the goal of scheduling is to minimize the total time from beginning to end (makespan). The term "job" for computing work dates to the mid 1950s, as in this use from 1955: "The program for an individual job is then written, calling up these subroutines by name wherever required, thus avoiding rewriting them for individual problems". The term continued in occasional use, such as for the IBM 709 (1958), and in wider use by early 1960s, such as for the IBM 7090, with widespread use from the Job Control Language of OS/360 (announced 1964). A standard early use of "job" is for compiling a program from source code, as this is a one-off task. The compiled program can then be run on batches of data. See also Further reading References
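To make the relationship between a job, its steps, and the processes that execute them concrete, here is a minimal sketch in Python (an illustration only, not drawn from any particular job scheduler; it assumes a Unix-like system where the example commands exist):

import subprocess

def run_job(steps):
    """Run a job as a sequence of steps (a job stream), each step a child process."""
    for argv in steps:
        # The parent process corresponds to the job; each step runs as a child process.
        result = subprocess.run(argv)
        if result.returncode != 0:
            return False        # a failed step fails the whole job
    return True                 # finite input, so the job eventually completes

if __name__ == "__main__":
    job = [["echo", "step 1: extract"], ["echo", "step 2: transform"]]
    print("job completed successfully" if run_job(job) else "job failed")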
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/0313291896] | [TOKENS: 380]
Contents Book sources This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number. Spaces and dashes in the ISBN do not matter. This page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its International Standard Book Number (ISBN). Online text Google Books and other retail sources below may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or you can browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online. Online databases Subscription eBook databases Libraries Alabama Alaska California Colorado Connecticut Delaware Florida Georgia Illinois Indiana Iowa Kansas Kentucky Massachusetts Michigan Minnesota Missouri Nebraska New Jersey New Mexico New York North Carolina Ohio Oklahoma Oregon Pennsylvania Rhode Island South Carolina South Dakota Tennessee Texas Utah Washington state Wisconsin Bookselling and swapping Find your book on a site that compiles results from other online sites: These sites allow you to search the catalogs of many individual booksellers: Non-English book sources If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias, linked below – they are more likely to have sources appropriate for that language. Find other editions The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search. Google books often lists other editions of a book and related books under the "about this book" link. You can convert between 10 and 13 digit ISBNs with these tools: Find on Wikipedia See also Get free access to research! Research tools and services Outreach Get involved
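The 10-to-13-digit ISBN conversion mentioned above follows a fixed rule: prefix the first nine digits with "978" and recompute the check digit. A small illustrative Python function follows (the function name is ours, not part of any tool described above; the test value is the ISBN that appears in this page's URL):

def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert a 10-digit ISBN to its 13-digit form; spaces and dashes are ignored."""
    digits = [c for c in isbn10 if c not in "- "]
    if len(digits) != 10:
        raise ValueError("expected a 10-character ISBN")
    core = "978" + "".join(digits[:9])     # drop the old check digit, add the EAN prefix
    # EAN-13 check digit: weights alternate 1, 3 over the first twelve digits
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

print(isbn10_to_isbn13("0-313-29189-6"))   # -> 9780313291890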
========================================
[SOURCE: https://en.wikipedia.org/wiki/Europa_Clipper] | [TOKENS: 6042]
Contents Europa Clipper Europa Clipper (previously known as the Europa Multiple Flyby Mission) is a space probe developed by NASA to study Europa, a Galilean moon of Jupiter. It was launched on October 14, 2024. The spacecraft used a gravity assist from Mars on March 1, 2025, and will use a gravity assist from Earth on December 3, 2026, before arriving at Europa in April 2030. The spacecraft will then perform a series of flybys of Europa while orbiting Jupiter. Europa Clipper is designed to study evidence for a subsurface ocean underneath Europa's ice crust, found by the Galileo spacecraft which orbited Jupiter from 1995 to 2003. Plans to send a spacecraft to Europa were conceived with projects such as Europa Orbiter and Jupiter Icy Moons Orbiter, in which a spacecraft would be inserted into orbit around Europa. However, due to the effects of radiation from the magnetosphere of Jupiter in Europa orbit, it was decided that it would be safer to insert a spacecraft into an elliptical orbit around Jupiter and make 49 close flybys of the moon instead. The Europa Clipper spacecraft is larger than any previous spacecraft used for NASA planetary missions. The orbiter will analyze the induced magnetic field around Europa and attempt to detect plumes of water ejecta from a subsurface ocean, in addition to performing various other investigations. The mission's name is a reference to the lightweight, fast clipper ships of the 19th century that routinely plied trade routes, since the spacecraft will pass by Europa at a rapid cadence, as frequently as every two weeks. The mission patch, which depicts a sailing ship, references the moniker. Europa Clipper complements the ESA's Jupiter Icy Moons Explorer, launched in 2023, which will attempt to fly past Europa twice and Callisto multiple times before moving into orbit around Ganymede. History In 1997, a Europa Orbiter mission was proposed by a team for NASA's Discovery Program but was not selected. NASA's JPL announced one month after the selection of Discovery proposals that a NASA Europa orbiter mission would be conducted. JPL then invited the Discovery proposal team to be the Mission Review Committee (MRC). At the same time as the proposal of the Discovery-class Europa Orbiter, the robotic Galileo spacecraft was already orbiting Jupiter. From December 8, 1995, to December 7, 1997, Galileo conducted the primary mission after entering the orbit of Jupiter. On that final date, the Galileo orbiter commenced an extended mission known as the Galileo Europa Mission (GEM), which ran until December 31, 1999. This was a low-cost mission extension with a budget of only US$30 million. The smaller team of about 40–50 people (compared with the primary mission's 200-person team from 1995 to 1997) did not have the resources to deal with problems, but when they arose, it was able to temporarily recall former team members (called "tiger teams") for intensive efforts to solve them. The spacecraft made several flybys of Europa (8), Callisto (4) and Io (2). On each flyby of the three moons it encountered, the spacecraft collected only two days' worth of data instead of the seven it had collected during the primary mission. During GEM's eight flybys of Europa over two years, flyby distances ranged from 196 to 3,582 km (122 to 2,226 mi). Europa has been identified as one of the locations in the Solar System that could possibly harbor microbial extraterrestrial life. 
Immediately following the Galileo spacecraft's discoveries and the independent Discovery program proposal for a Europa orbiter, JPL conducted preliminary mission studies that envisioned a capable spacecraft such as the Jupiter Icy Moons Orbiter (a US$16 billion mission concept), the Jupiter Europa Orbiter (a US$4.3 billion concept), another orbiter (US$2 billion concept), and a multi-flyby spacecraft: Europa Clipper. A mission to Europa was recommended by the National Research Council in 2013. The approximate cost estimate rose from US$2 billion in 2013 to US$4.25 billion in 2020. The mission is a joint project between the Johns Hopkins University's Applied Physics Laboratory (APL), and the Jet Propulsion Laboratory (JPL). In March 2013, US$75 million was authorized to expand on the formulation of mission activities, mature the proposed science goals, and fund preliminary instrument development, as suggested in 2011 by the Planetary Science Decadal Survey. In May 2014, a House bill substantially increased the Europa Clipper (referred to as Europa Multiple Flyby Mission) funding budget for the 2014 fiscal year from US$15 million to US$100 million to be applied to pre-formulation work. Following the 2014 election cycle, bipartisan support was pledged to continue funding for the Europa Multiple Flyby Mission project. The executive branch also granted US$30 million for preliminary studies. In April 2015, NASA invited the ESA to submit concepts for an additional probe to fly together with the Europa Clipper spacecraft, with a mass limit of 250 kg. It could be a simple probe, an impactor, or a lander. An internal assessment at ESA considered whether there was interest and funds available, opening a collaboration scheme similar to the very successful Cassini–Huygens approach. In May 2015, NASA chose nine instruments that would fly on board the orbiter, budgeted to cost about US$110 million over the next three years. In June 2015, NASA approved the mission concept, allowing the orbiter to move to its formulation stage. In January 2016, NASA approved the addition of a lander, but this was canceled in 2017 because it was deemed too risky. In May 2016, the Ocean Worlds Exploration Program was approved, of which the Europa mission is part. In February 2017, the mission moved from Phase A to Phase B (the preliminary design phase). On July 18, 2017, the House Space Subcommittee held hearings on the Europa Clipper as a scheduled Large Strategic Science Missions class, and to discuss a possible follow up mission simply known as the Europa Lander. Phase B continued into 2019. In addition, subsystem vendors were selected, as well as prototype hardware elements for the science instruments. Spacecraft sub-assemblies were built and tested as well. On August 19, 2019, the Europa Clipper proceeded to Phase C: final design and fabrication. On March 3, 2022, the spacecraft moved on to Phase D: assembly, testing, and launch. On June 7, 2022, the main body of the spacecraft was completed. By August 2022, the high-gain antenna had completed its major testing campaigns. By January 30, 2024, all of the science instruments were added to the spacecraft. In March 2024, it was reported that the spacecraft underwent successful testing and was on track for launch later in the year. In May 2024, the spacecraft arrived at Kennedy Space Center for final launch preparations. In September 2024, final pre-launch review was successfully completed, clearing the way for launch. 
In early October 2024, due to the incoming Hurricane Milton, the spacecraft was placed in secure storage for safekeeping until the hurricane passed. In July 2024, the spacecraft faced concerns of delay and missing the launch window because of a discovery in June 2024 that its components were not as radiation-hardened as previously believed. However, over the summer, intensive re-testing of the transistor components in question found that they would likely be annealed enough to 'self-heal'. In September 2024, Europa Clipper was approved for a launch window opening on October 10, 2024; however, on October 6, 2024, NASA announced that it would be standing down from the October 10 launch due to Hurricane Milton. Europa Clipper was finally launched on October 14, 2024. The probe is scheduled to be crashed into Jupiter, Ganymede, or Callisto, to prevent it from crashing into Europa. In June 2022, lead project scientist Robert Pappalardo revealed that mission planners for Europa Clipper were considering disposing of the probe by crashing it into the surface of Ganymede in case an extended mission was not approved early in the main science phase. He noted that an impact would help the ESA's JUICE mission collect more information about Ganymede's surface chemistry. In a 2024 paper, Pappalardo said the mission would last four years in Jupiter orbit, and that disposal was targeted for September 3, 2034 if NASA did not approve a mission extension. Objectives The goals of Europa Clipper are to explore Europa, investigate its habitability and aid in the selection of a landing site for the proposed Europa Lander. This exploration is focused on understanding the three main requirements for life: liquid water, chemistry, and energy. The spacecraft carries scientific instruments which will be used to analyze the potential presence of geothermal activity and the moon's induced magnetic field, which in turn will provide an indication of the presence of a saline-rich subsurface ocean. Because Europa lies well within the harsh radiation fields surrounding Jupiter, even a radiation-hardened spacecraft in a close orbit would remain functional for only a few months. Most instruments can gather data far faster than the communications system can transmit it to Earth due to the limited number of antennas available on Earth to receive the scientific data. Therefore, another key limiting factor on science for a Europa orbiter is the time available to return data to Earth. In contrast, the amount of time during which the instruments can make close-up observations is less important. Studies by scientists from the Jet Propulsion Laboratory show that by performing several flybys with many months to return data, the Europa Clipper concept will enable a US$2 billion mission to conduct the most crucial measurements of the canceled US$4.3 billion Jupiter Europa Orbiter concept. Between each of the flybys, the spacecraft will have seven to ten days to transmit data stored during each brief encounter. That will let the spacecraft have up to a year of time to transmit its data compared to just 30 days for an orbiter. The result will be almost three times as much data returned to Earth, while reducing exposure to radiation. Europa Clipper will not orbit Europa, but will instead orbit Jupiter and conduct 49 flybys of Europa, each at altitudes ranging from 25 to 2,700 km (16 to 1,678 mi) during its 3.5-year mission. 
A key feature of the mission concept is that Europa Clipper would use gravity assists from Europa, Ganymede and Callisto to change its trajectory, allowing the spacecraft to return to a different close approach point with each flyby. Each flyby would cover a different sector of Europa to achieve a medium-quality global topographic survey, including ice thickness. Europa Clipper could conceivably fly by at low altitude through the plumes of water vapor erupting from the moon's ice crust, thus sampling its subsurface ocean without having to land on the surface and drill through the ice. The spacecraft is expected to receive a total ionizing dose of 2.8 megarads (28 kGy) during the mission. Shielding from Jupiter's harsh radiation belt will be provided by a radiation vault with 9.2 mm (0.36 in) thick aluminum alloy walls, which enclose the spacecraft electronics. To maximize the effectiveness of this shielding, the electronics are also nested in the core of the spacecraft for additional radiation protection. Design and construction Europa Clipper is a NASA Planetary Science Division mission, designated a Large Strategic Science Mission, and funded under the Planetary Missions Program Office's Solar System Exploration program as its second flight. It is also supported by the new Ocean Worlds Exploration Program. The spacecraft bus is a 5-meter-long combination of a 150-cm-wide aluminum cylindrical propulsion module and a rectangular box. The electronic components are protected from the intense radiation by a 150-kilogram titanium, zinc and aluminum shielded vault in the box. Both radioisotope thermoelectric generator (RTG) and photovoltaic power sources were assessed as options to power the orbiter. Although solar power is only 4% as intense at Jupiter as it is in Earth's orbit, powering a Jupiter orbital spacecraft with solar panels was demonstrated by the Juno mission. The alternative to solar panels was a multi-mission radioisotope thermoelectric generator (MMRTG), fueled with plutonium-238. The power source had already been demonstrated in the Mars Science Laboratory (MSL) mission. Five units were available, with one reserved for the Mars 2020 rover mission and another as backup. In September 2013, it was decided that the solar array was the less expensive option to power the spacecraft, and on October 3, 2014, it was announced that solar panels were chosen to power Europa Clipper. The mission's designers determined that solar power was both cheaper than plutonium and practical to use on the spacecraft. Despite the increased weight of solar panels compared to plutonium-powered generators, the vehicle's mass was projected to still be within acceptable launch limits. Each panel has a surface area of 18 m2 (190 sq ft) and produces 150 watts continuously when pointed towards the Sun while orbiting Jupiter. When in Europa's shadow, onboard batteries charged by the solar panels will enable the spacecraft to continue gathering data. However, ionizing radiation can damage solar panels. The Europa Clipper's orbit will pass through Jupiter's intense magnetosphere, which is expected to gradually degrade the solar panels as the mission progresses. The solar panels were provided by Airbus Defence and Space, Netherlands. The propulsion subsystem was built by NASA's Goddard Space Flight Center in Greenbelt, Maryland. It is part of the Propulsion Module, delivered by Johns Hopkins Applied Physics Laboratory in Laurel, Maryland. 
It is 3 metres (10 ft) tall, 1.5 metres (5 ft) in diameter and comprises about two-thirds of the spacecraft's main body. The propulsion subsystem carries nearly 2,700 kilograms (6,000 lb) of monomethyl hydrazine and dinitrogen tetroxide propellant, 50% to 60% of which will be used for the 6 to 8-hour Jupiter orbit insertion burn. The spacecraft has a total of 24 rocket engines rated at 27.5 N (6.2 lbf) thrust for attitude control and propulsion. The spacecraft includes a suite of antennas for communication and scientific measurements. Chief among them is the high-gain antenna (HGA), which has a 3.1-meter (10-foot) diameter and is capable of both uplink and downlink communications over multiple frequency bands. The HGA operates on X-band frequencies of 7.2 GHz (uplink) and 8.4 GHz (downlink), as well as a Ka-band frequency of 32 GHz, approximately 12 times higher than typical cellular communications. The communication system includes additional antennas such as low-gain antennas (LGAs), medium-gain antennas (MGAs), and fan-beam antennas (FBAs), which are used for different mission phases depending on orientation and distance from Earth. The Ka-band is primarily used for high-rate data return, enabling faster transmission of scientific data. Data rates vary depending on antenna alignment, frequency, and ground station availability. Downlink data rates via X-band can reach approximately 16 kilobits per second, while Ka-band transmissions can reach up to 500 kilobits per second under optimal conditions. Uplink rates for command transmission are typically around 2 kilobits per second. The antenna system supports not only communications but also radio science and gravity science experiments. Using coherent two-way X-band Doppler tracking and radio occultation techniques, researchers will study Europa's internal structure, ice shell thickness, ocean characteristics, and gravity field. Small variations in the spacecraft's velocity—detected via Doppler shifts—will help scientists determine the moon's mass distribution and potential subsurface ocean. The HGA was designed and developed under the leadership of Matt Bray at the Johns Hopkins Applied Physics Laboratory (APL), and underwent rigorous testing at Langley Research Center and Goddard Space Flight Center in 2022, including beam pattern, thermal vacuum, and vibration testing to ensure precision and reliability. The Europa Clipper mission is equipped with nine scientific instruments. The nine science instruments for the orbiter, announced in May 2015, have a planned total mass of 82 kg (181 lb).[needs update] The Europa Thermal Emission Imaging System will provide high spatial resolution as well as multi-spectral imaging of the surface of Europa in the mid to far infrared bands to help detect heat which would suggest geologically active sites and areas, such as potential vents erupting plumes of water into space. The principal investigator is Philip Christensen of Arizona State University. This instrument is derived from the Thermal Emission Imaging System (THEMIS) on the 2001 Mars Odyssey orbiter, also developed by Philip Christensen. The Mapping Imaging Spectrometer for Europa is an imaging near infrared spectrometer to probe the surface composition of Europa, identifying and mapping the distributions of organics (including amino acids and tholins), salts, acid hydrates, water ice phases, and other materials. 
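The statement above that 50% to 60% of the propellant is spent on the Jupiter orbit insertion burn can be sanity-checked with the Tsiolkovsky rocket equation. The sketch below is illustrative only: the ~840 m/s burn is the figure given later in this article, while the wet mass (~6,000 kg) and specific impulse (~320 s, typical for MMH/NTO bipropellant engines) are assumptions rather than published mission values.

```python
import math

# Illustrative Tsiolkovsky rocket-equation check of the Jupiter orbit
# insertion (JOI) propellant fraction. Wet mass and Isp are assumed values.
G0 = 9.80665                  # standard gravity, m/s^2
ISP_S = 320.0                 # assumed specific impulse for MMH/NTO, seconds
WET_MASS_KG = 6000.0          # assumed spacecraft mass before the burn
DELTA_V_JOI = 840.0           # m/s, figure quoted later in the article

exhaust_velocity = ISP_S * G0                             # ~3,138 m/s
mass_ratio = math.exp(-DELTA_V_JOI / exhaust_velocity)    # m_final / m_initial
propellant_used = WET_MASS_KG * (1.0 - mass_ratio)        # ~1,400 kg

print(f"propellant for JOI: ~{propellant_used:.0f} kg")
print(f"fraction of the 2,700 kg load: {propellant_used / 2700:.0%}")
# ~1,400 kg, roughly half of the 2,700 kg carried, which falls within the
# quoted 50-60% range.
```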
The principal investigator is Diana Blaney of the Jet Propulsion Laboratory, and the instrument was built in collaboration with the Johns Hopkins University Applied Physics Laboratory (APL). The Europa Imaging System consists of visible spectrum cameras to map Europa's surface and study smaller areas in high resolution, as low as 0.5 m (20 in) per pixel. It consists of two cameras, a narrow-angle camera and a wide-angle camera, both of which use 2048×4096 pixel CMOS detectors. The principal investigator is Elizabeth Turtle of the Applied Physics Laboratory. The Europa Ultraviolet Spectrograph instrument will be able to detect small erupting plumes, and will provide valuable data about the composition and dynamics of the moon's exosphere. The principal investigator is Kurt Retherford of Southwest Research Institute. Retherford was previously a member of the group that discovered plumes erupting from Europa while using the Hubble Space Telescope in the UV spectrum. The Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON) is a dual-frequency ice-penetrating radar (9 and 60 MHz) designed to sound Europa's ice crust from the near-surface to the ocean, revealing the hidden structure of Europa's ice shell and potential water pockets within it. REASON will probe the exosphere, the surface and near-surface, and the full depth of the ice shell down to the ice-ocean interface, to depths of up to 30 km. The principal investigator is Donald Blankenship of the University of Texas at Austin. This instrument was built by the Jet Propulsion Laboratory. The Europa Clipper Magnetometer (ECM) will be used to analyze the magnetic field around Europa. The instrument consists of three fluxgate sensors placed along an 8.5-metre (28 ft) boom, which was stowed during launch and deployed afterwards. The magnetic field of Jupiter is thought to induce electric currents in a salty ocean beneath Europa's ice, which in turn causes Europa to produce its own, induced magnetic field. By studying the strength and orientation of this induced field over multiple flybys, scientists hope to confirm the existence of Europa's subsurface ocean, characterize the thickness of its icy crust, and estimate the water's depth and salinity. The instrument team leader is Margaret Kivelson of the University of Michigan. ECM replaced the proposed Interior Characterization of Europa using Magnetometry (ICEMAG) instrument, which was canceled due to cost overruns; ECM is a simpler and cheaper magnetometer than ICEMAG would have been. The Plasma Instrument for Magnetic Sounding (PIMS) measures the plasma surrounding Europa to characterize the magnetic fields generated by plasma currents. These plasma currents mask the magnetic induction response of Europa's subsurface ocean. In conjunction with a magnetometer, it is key to determining Europa's ice shell thickness, ocean depth, and salinity. PIMS will also probe the mechanisms responsible for weathering and releasing material from Europa's surface into the atmosphere and ionosphere, and will help determine how Europa influences its local space environment and Jupiter's magnetosphere. The principal investigator is Joseph Westlake of the Applied Physics Laboratory. The Mass Spectrometer for Planetary Exploration (MASPEX) will determine the composition of the surface and subsurface ocean by measuring Europa's extremely tenuous atmosphere and any surface materials ejected into space. Jack Waite, who led development of MASPEX, was also Science Team Lead of the Ion and Neutral Mass Spectrometer (INMS) on the Cassini spacecraft.
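A simplified way to see how the magnetometer and plasma measurements described above constrain a hidden ocean is the first-order "perfect conductor" induction model often used for such estimates: a conducting shell responds to Jupiter's time-varying field with an induced dipole whose relative amplitude scales with the cube of the conductor's radius. The sketch below uses that textbook relation; the ice-shell thicknesses and the perfect-conductor assumption are illustrative, not mission results.

```python
# First-order induction model: for a perfectly conducting shell, the induced
# dipole field measured at the surface, relative to Jupiter's driving field,
# scales as (r_ocean_top / R_europa)**3. Ice thicknesses are illustrative
# assumptions used only to show the sensitivity of the method.
R_EUROPA_KM = 1560.8

for ice_shell_km in (5, 25, 100):
    r_ocean_top = R_EUROPA_KM - ice_shell_km
    amplitude = (r_ocean_top / R_EUROPA_KM) ** 3
    print(f"ice shell {ice_shell_km:>3} km -> induced amplitude ~{amplitude:.2f}")
# A thin shell gives a near-unity response (~0.99); a 100 km shell noticeably
# less (~0.82). Real oceans of finite conductivity fall below these ceilings,
# which is why combining ECM data with PIMS plasma corrections is needed.
```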
The principal investigator is Jim Burch of Southwest Research Institute, who was previously the leader of the Magnetospheric Multiscale Mission. The SUrface Dust Analyzer (SUDA) is a mass spectrometer that will measure the composition of small solid particles ejected from Europa, providing the opportunity to directly sample the surface and potential plumes on low-altitude flybys. The instrument is capable of identifying traces of organic and inorganic compounds in the ice of ejecta, and is sensitive enough to detect signatures of life even if the sample contains less than a single bacterial cell in a collected ice grain. The principal investigator is Sascha Kempf of the University of Colorado Boulder. Although it was designed primarily for communications, the high-gain radio antenna will be used to perform additional radio observations and investigate Europa's gravitational field, acting as a radio science subsystem. Measuring the Doppler shift in the radio signals between the spacecraft and Earth will allow the spacecraft's motion to be determined in detail. As the spacecraft performs each of its 45 Europa flybys, its trajectory will be altered by the moon's gravitational field. The Doppler data will be used to determine the higher order coefficients of that gravity field, to determine the moon's interior structure, and to examine how Europa is deformed by tidal forces. The instrument team leader is Erwan Mazarico of NASA's Goddard Space Flight Center. Launch and trajectory Congress had originally mandated the Europa Clipper to launch on NASA's Space Launch System (SLS) super heavy-lift launch vehicle, but NASA had requested that other vehicles be allowed to launch the spacecraft due to a foreseen lack of available SLS vehicles. The United States Congress's 2021 omnibus spending bill directed the NASA Administrator to conduct a full and open competition to select a commercial launch vehicle if the conditions to launch the probe on a SLS rocket cannot be met. On January 25, 2021, NASA's Planetary Missions Program Office formally directed the mission team to "immediately cease efforts to maintain SLS compatibility" and move forward with a commercial launch vehicle. On February 10, 2021, it was announced that the mission would use a 5.5-year trajectory to the Jovian system, with gravity-assist maneuvers involving Mars (March 1, 2025) and Earth (December 3, 2026). Launch was targeted for a 21-day period between October 10 and 30, 2024, giving an arrival date in April 2030, and backup launch dates were identified in 2025 and 2026. The SLS option would have entailed a direct trajectory to Jupiter taking less than three years. One alternative to the direct trajectory was identified as using a commercial rocket, with a longer 6-year cruise time involving gravity assist maneuvers at Venus, Earth and/or Mars. Additionally, a launch on a Delta IV Heavy with a gravity assist at Venus was considered. In July 2021 the decision was announced to launch on a Falcon Heavy rocket, in a fully expendable configuration. Three reasons were given: reasonable launch cost (ca. $178 million), questionable SLS availability, and possible damage to the payload due to strong vibrations caused by the solid boosters attached to the SLS launcher. The move to Falcon Heavy saved an estimated US$2 billion in launch costs alone. 
NASA was not sure an SLS would be available for the mission since the Artemis program would use SLS rockets extensively, and the SLS's use of solid rocket boosters (SRBs) generates more vibration in the payload than a launcher without SRBs. The cost to redesign Europa Clipper for the SLS vibratory environment was estimated at US$1 billion. Europa Clipper was originally scheduled to launch on October 10, a few days after a Falcon 9 launched the ESA's Hera to 65803 Didymos from Cape Canaveral Space Force Station on a similar interplanetary trajectory. However, this launch attempt was scrubbed ahead of Hurricane Milton, which made landfall in Florida the day before the planned launch, and the launch was postponed by several days. Europa Clipper was launched on October 14, 2024, at 12:06 p.m. EDT from Launch Complex 39A at NASA's Kennedy Space Center on a Falcon Heavy. The rocket's boosters and first stage were both expended as a result of the spacecraft's mass and trajectory; the boosters had previously flown five times (including on the launches of Psyche for NASA and an X-37B for the United States Space Force), while the center core flew only on this mission. The trajectory of Europa Clipper started with a gravity assist from Mars on March 1, 2025, which slowed the probe slightly (by about 2 kilometers per second) and modified its orbit around the Sun so that the spacecraft can fly by Earth on December 3, 2026 and gain additional speed. The probe will then reach aphelion beyond Jupiter's orbit on October 4, 2029 before slowly falling into Jupiter's gravity well and executing its orbital insertion burn in April 2030. After entry into the Jupiter system, Europa Clipper will perform a flyby of Ganymede at an altitude of 500 km (310 mi), which will reduce the spacecraft's velocity by ~400 m/s (890 mph). This will be followed by firing the main engine at a distance of 11 Rj (Jovian radii) to provide a further ~840 m/s (1,900 mph) of delta-V, sufficient to insert the spacecraft into a 202-day orbit around Jupiter. Once the spacecraft reaches the apoapsis of that initial orbit, it will perform another engine burn to provide a ~122 m/s (270 mph) periapsis raise maneuver (PRM). The spacecraft's cruise and science phases will overlap with those of the ESA's Juice spacecraft, which was launched in April 2023 and will arrive at Jupiter in July 2031. Europa Clipper is due to arrive at Jupiter 15 months prior to Juice, despite a launch date planned 18 months later, owing to a more powerful launch vehicle and a faster flight plan with fewer gravity assists. Public outreach To raise public awareness of the Europa Clipper mission, NASA undertook a "Message in a Bottle" campaign, launching a "Send Your Name to Europa" drive on June 1, 2023, through which people around the world were invited to send their names as signatories to a poem called "In Praise of Mystery: A Poem for Europa", written by the U.S. Poet Laureate Ada Limón, for the 2.9-billion-kilometer (1.8-billion mi) voyage to Jupiter. The poem describes the connections between Earth and Europa. The poem is engraved on Europa Clipper inside a tantalum metal plate, about 7 by 11 inches (18 by 28 centimeters), that seals an opening into the vault. The inward-facing side of the metal plate is engraved with the poem in the poet's own handwriting. The public participants' names are etched onto a microchip attached to the plate, within an artwork of a wine bottle surrounded by the four Galilean moons.
After registering their names, participants received a digital ticket with details of the mission's launch and destination. According to NASA, 2,620,861 people signed their names to Europa Clipper's Message in a Bottle, most of them from the United States. Other elements etched on the inward-facing side, alongside the poem and names, are the Drake equation; representations of the spectral lines of the hydrogen atom and the hydroxyl radical, together known as the water hole; and a portrait of planetary scientist Ron Greeley. The outward-facing panel features art that highlights Earth's connection to Europa. Linguists collected recordings of the word "water" spoken in 103 languages, drawn from language families around the world. The audio files were converted into waveforms and etched into the plate. The waveforms radiate out from a symbol representing the American Sign Language sign for "water". The research organization METI International gathered the audio recordings, and its president Douglas Vakoch designed the water hole component of the message. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fort_Caroline] | [TOKENS: 1953]
Contents Fort Caroline Fort Caroline was an attempted French colonial settlement in Florida, located on the banks of the St. Johns River in present-day Duval County. It was established under the leadership of René Goulaine de Laudonnière on 22 June 1564, following King Charles IX's enlisting of Jean Ribault and his Huguenot settlers to stake a claim in French Florida ahead of Spain. The French colony came into conflict with the Spanish, who established St. Augustine on 8 September 1565, and Fort Caroline was sacked by Spanish troops under Pedro Menéndez de Avilés on 20 September. The Spanish continued to occupy the site as San Mateo until 1569. The exact site of the former fort is unknown. In 1953 the National Park Service established the Fort Caroline National Memorial along the southern bank of the St. John's River near the point that commemorates Laudonnière's first landing. This is generally accepted by scholars as being in the vicinity of the original fort, though probably not the exact location. The memorial is now managed as a part of the Timucuan Ecological and Historic Preserve, but it is also a distinct unit under administration of the National Park Service. History A French expedition, organized by Protestant leader Admiral Gaspard de Coligny and led by the French Explorer Jean Ribault, had landed at the site on the May River (now the St. Johns River) in May 1562. Here Ribault encountered the Timucuans led by Chief Saturiwa. Ribault took some 28 troops north along the coast, where on present-day Parris Island, South Carolina they developed a settlement known as Charlesfort. Ribault returned to Europe to arrange supplies for the new colony. When he was captured and briefly imprisoned in England on suspicion of spying related to the French Wars of Religion, he was prevented from returning to Florida. After a year without supplies or leadership, and beset by hostility from the native populations, all but one of the colonists left Charlesfort to sail back to Europe. During their voyage in an open boat, they were reduced to cannibalism before the survivors were rescued in English waters. Another French force reestablished a fort at the site in 1577–1578. Meanwhile, René Goulaine de Laudonnière, who had been Ribault's second-in-command on the 1562 expedition, led a contingent of around 200 new settlers back to Florida, where they founded Fort Caroline (or Fort de la Caroline) on 22 June, 1564; the site was on a small plain formed by the western slope of the high steep bank later called St. Johns Bluff. The fort was named for King Charles IX of France. For just over a year, this settlement was beset by hunger and desertion, and attracted the attention of Spanish authorities who considered it a challenge to their control over the area. The French colonists "had to rely heavily on the Indians" for both food and trade. The Timucua welcomed them. French soldiers also traveled across Timucuan territory, encountering the Yustaga people and unsuccessfully seeking gold and silver mines. Timucua chief Outina twice "coaxed the French into participating in attacks on villages of his rival, [the] Potano, to seize surplus corn." French soldiers who deserted from the fort raided Timucua settlements, souring relations with them. In spring 1565, Outina rebuffed a third request for food and was taken hostage by the French, provoking open confrontation with the Timucua that included "two tense weeks of skirmishes and one all-out battle." The French relented and released Outina. 
On 20 July, 1565, the English adventurer John Hawkins arrived at the fort with his fleet looking for fresh water; there he exchanged his smallest ship for four cannons and a supply of powder and shot. The ship and provisions gained from Hawkins enabled the French to survive and prepare to move back to France as soon as possible. As Laudonnière writes: "I may saye that wee receaved as manye courtesies of the Generall, as it was possible to receive of any man living. Wherein doubtlesse hee hath wonne the reputation of a good and charitable man, deserving to be esteemed as much of us all as if hee had saved all our lives." The French introduced Hawkins to tobacco, which they all were using, and in turn he introduced it to England upon his return. In late August, Ribault, who had been released from English custody in June 1565 and sent by Coligny back to Florida, arrived at Fort Caroline with a large fleet and hundreds of soldiers and settlers, taking command of the colony. However, the recently appointed Spanish Governor of Florida, Don Pedro Menéndez de Avilés, had simultaneously been dispatched from Spain with orders to remove the French outpost, and arrived within days of Ribault's landing. After a brief skirmish between Ribault's ships and Menéndez's ships, the latter retreated 35 miles (56 km) southward, where they established the settlement of St. Augustine. Ribault pursued the Spanish with several of his ships and most of his troops, but he was surprised at sea by a violent storm lasting several days. Meanwhile, Menéndez launched an assault on Fort Caroline by marching his forces overland during the storm, leading a surprise dawn attack on Fort Caroline on 20 September. At this time, the garrison contained 200 to 250 people. The only survivors were about 50 women and children who were taken prisoner and a few defenders, including Laudonnière, who managed to escape; the rest were massacred. As for Ribault's fleet, all of the ships either sank or ran aground south of St. Augustine during the storm, and many of the Frenchmen on board were lost at sea. Ribault and his marooned sailors marched northwards and were eventually located by Menéndez with his troops and summoned to surrender. Apparently believing that his men would be well treated, Ribault capitulated. Menéndez then executed Ribault and several hundred Huguenots (Francisco López de Mendoza Grajales, chaplain to the Spanish forces, identifies them as "all Lutherans" and dates their execution 29 September 1565, St. Michael's Day) as heretics at what is now known as the Matanzas Inlet. (Matanzas is Spanish for "slaughters".) The atrocity shocked Europeans even in that bloody era of religious strife. A fort built much later, Fort Matanzas, is in the vicinity of the site. This massacre ended France's attempts at colonization of the southeastern Atlantic coast of North America until 1577–1578 when Nicholas Strozzi and his crew built a fort after their ship, Le Prince, was wrecked at Port Royal Sound. The Spanish destroyed Fort Caroline and built their own fort on the same site. In April 1568, Dominique de Gourgues led a French force which attacked, captured and burned the fort. He then slaughtered the Spanish prisoners in revenge for the 1565 massacre. The Spanish rebuilt, but permanently abandoned the fort the following year. The exact location of the fort is not known. Free Black population at Fort Caroline When the Spanish conquistador Pedro Menéndez, who had black crew members in his fleet, founded St. 
Augustine in 1565, he wrote that his settlers had been preceded by free Africans in the French settlement at Fort Caroline. The fort also employed Black slave labor. Together, Fort Caroline and the St. Augustine area represent some of the earliest points of history for the Black (and Black Catholic) community of what would become the United States. Reproductions of Fort Caroline and speculation The original site of Fort de la Caroline has never been determined, but it is believed to have been located near the present-day Fort Caroline National Memorial. The National Park Service constructed an outdoor exhibit of the original fort in 1964, but it was destroyed by Hurricane Dora in the same year. Today, the second replica, a near full-scale "interpretive model" of the original Fort de la Caroline, also constructed and maintained by the National Park Service, illustrates the modest defenses upon which the 16th-century French colonists depended. Proposed alternative location On 21 February 2014, researchers Fletcher Crowe and Anita Spring presented claims at a conference hosted by Florida State University that Fort Caroline was located not on the St. Johns River, but on the Altamaha River in southeast Georgia. The scholars proposed that period French maps, particularly a 1685 map of "French Florida" from the Bibliothèque Nationale de France, support the more northern location. They further argued that the Native Americans living near the fort spoke Guale, the language spoken in what is now Coastal Georgia, rather than Timucua, the language of northeast Florida. Other scholars have been skeptical of the hypothesis. University of North Florida archaeologist Robert Thunen considers the documentary evidence weak and believes the location is implausibly far from St. Augustine, considering the Spanish were able to march overland to Fort Caroline in two days amid a hurricane. Chuck Meide, archaeologist at the St. Augustine Lighthouse and Museum, expressed similar criticism on the museum's blog, noting that other French and American scholars at the conference seemed similarly skeptical. Gallery See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-100] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had developed with Nintendo and Sega into a console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives also opposed it, who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time, and it was already confirmed behind closed doors that it would be the PlayStation's first game by December 1993, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on May 10, 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues outside shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and walked off to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console countrywide (as the PS One model) on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the console could not be released because the trademark had been registered by another company, so the market was initially dominated by the officially distributed Sega Saturn; as the Saturn withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after Sega left the market the PlayStation grew to a base of about 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people entering adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.), in which the controller's button symbols stood in for the omitted letters, and "U R NOT E" (with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal enlarged, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical over Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, encouraged by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the new millennium: in mid-2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, a milestone the PlayStation 2 later reached even faster. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model onwards. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications using C compilers.
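To put the GPU throughput figures above in per-frame terms, the arithmetic below divides the quoted per-second rates by common display refresh rates. The 30 and 60 frame-per-second targets are illustrative assumptions rather than hardware limits, and the lower polygon rate is the figure commonly cited for texture-mapped, light-sourced polygons.

```python
# Rough per-frame budget implied by the GPU throughput figures quoted above.
# Frame rates of 30 and 60 fps are illustrative targets, not hardware limits.
FLAT_SHADED_PER_SEC = 360_000     # flat-shaded polygons per second
TEXTURED_PER_SEC = 180_000        # lower quoted rate (commonly cited as textured, lit)

for fps in (30, 60):
    flat = FLAT_SHADED_PER_SEC // fps
    textured = TEXTURED_PER_SEC // fps
    print(f"at {fps} fps: ~{flat:,} flat-shaded or ~{textured:,} textured polygons per frame")
# ~12,000 flat-shaded (or ~6,000 textured) polygons per frame at 30 fps, which
# is consistent with the low-polygon character and environment models typical
# of PlayStation-era games.
```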
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than marking its buttons with the letters or numbers traditionally used, the PlayStation controller established a set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue controller, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons that toggle analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. A Nintendo spokesman, however, denied that Nintendo had taken legal action, and Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down.

In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, it has longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock eventually replaced its predecessors as the default controller.

Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun) and the Glasstron (a monoscopic head-mounted display).

Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication, a real-time clock, built-in flash memory and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America.

In addition to playing games, most PlayStation models can play CD-Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile: the console can be emulated on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime and was at the centre of several controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to run PlayStation games with improved visual fidelity, enhanced resolutions and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of unfair competition and patent infringement for allowing the PlayStation BIOS to be used on a Sega console. Bleem! was subsequently forced to shut down in November 2001.
Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, owing to the growing popularity of CD-Rs and optical disc drives with burning capability. To discourage illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. All genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts); a simplified sketch of the resulting boot check appears below. The signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive. The drive could not detect the wobble frequency itself, however, because the laser pick-up of an ordinary optical drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process, so any duplicate it produced omitted the wobble.

Early PlayStations, particularly early SCPH-1000 models, exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects for the laser assembly. The solution is to place the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off.

The first batch of PlayStations use a KSM-440AAM laser unit whose case and movable parts are built entirely of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates the wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models.

Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
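The boot-time authentication described above can be pictured with a short sketch. All names below are hypothetical and purely illustrative; the real check lives in the drive controller and BIOS rather than in application code, and the only behaviour taken as given is the one stated above: a genuine disc yields a wobble-decoded value from its pregap, and the console refuses to boot a disc whose value is missing or belongs to another region.

/* Hypothetical sketch of the disc-authentication logic described above.
 * None of these names are real firmware symbols. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { REGION_NONE, REGION_JAPAN, REGION_AMERICA, REGION_EUROPE } Region;

/* Pretend hardware query: returns the region value recovered from the
 * wobble-modulated data in the disc's pregap, or REGION_NONE if the
 * pick-up could not detect the expected wobble (e.g. on a burned copy). */
static Region read_pregap_wobble_region(void)
{
    return REGION_NONE;            /* stand-in: a copied disc carries no wobble data */
}

static bool allow_boot(Region console_region)
{
    Region disc_region = read_pregap_wobble_region();
    if (disc_region == REGION_NONE)
        return false;                            /* no wobble signature: refuse to boot */
    return disc_region == console_region;        /* wobble data doubles as a region lock */
}

int main(void)
{
    printf(allow_boot(REGION_EUROPE) ? "booting game\n" : "disc rejected\n");
    return 0;
}

Because an ordinary CD writer cannot reproduce the wobble, even a bit-exact copy of a disc's user data fails this check.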
Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units.

Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden) and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits and Platinum ranges. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of the console's launch catalogue; its breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community.

Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released) and focus testing showed that most consumers preferred it.

Reception

The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel, rivalling the technology of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0 and 9.5; for every editor this was the highest score given to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of its games compared with the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the video game market of the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation that was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand thanks to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third.

The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute 23% of the company's overall profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, along with two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs, and hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4 and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in moving the industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound that it "ruled the 1990s".

In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter, and in June Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because the proprietary cartridge format helped it enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue.

Besides their larger capacity, CD-ROMs could be produced in bulk far more quickly than ROM cartridges: about a week compared with two to three months. The cost of production per unit was also far lower, allowing Sony to offer games at roughly 40% lower cost to the user than ROM cartridges while still making the same net revenue per copy (a worked example with illustrative figures appears below). In Japan, Sony published smaller print runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation."

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
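The "about 40% lower cost at the same net revenue" claim is easiest to see with a small worked example. All figures in the sketch below are invented purely for illustration (the article gives no cost breakdown), so only the overall relationship, not the numbers themselves, should be read from it.

/* Worked example with purely illustrative, assumed figures: if the publisher
 * keeps the same net revenue per copy, a cheaper medium translates directly
 * into a lower shelf price. None of these numbers come from the article. */
#include <stdio.h>

int main(void)
{
    /* Assumed per-unit amounts, in US dollars. */
    double cartridge_media_cost = 30.0;  /* manufacturing and royalty on a ROM cartridge */
    double cd_media_cost        =  5.0;  /* pressing and royalty on a CD-ROM             */
    double publisher_net        = 15.0;  /* revenue the publisher keeps per copy         */
    double retail_margin        = 15.0;  /* retailer's cut per copy                      */

    double cartridge_price = cartridge_media_cost + publisher_net + retail_margin;
    double cd_price        = cd_media_cost        + publisher_net + retail_margin;

    printf("cartridge game price: $%.2f\n", cartridge_price);   /* $60.00 */
    printf("CD-ROM game price:    $%.2f\n", cd_price);          /* $35.00 */
    printf("price reduction:      %.0f%%\n",
           100.0 * (cartridge_price - cd_price) / cartridge_price);  /* about 42% */
    return 0;
}

With these assumed figures the CD version sells for roughly 42% less while the publisher's take per copy is unchanged, which is the shape of the trade-off described above.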
Faced with these constraints, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller and high retail price, though the console's design received praise. The console sold poorly.