Source: https://en.wikipedia.org/wiki/Mastercard
Mastercard

Mastercard Inc. (stylized as MasterCard from 1979 to 2016 and as mastercard from 2016 to 2019) is an American multinational payment card services corporation headquartered in Purchase, New York. It offers a range of payment transaction processing and other related payment services (such as travel-related payments and bookings). Throughout the world, its principal business is to process payments between the banks of merchants and the card-issuing banks or credit unions of the purchasers who use Mastercard-brand debit, credit and prepaid cards to make purchases. Mastercard has been publicly traded since 2006. Mastercard (originally Interbank, then Master Charge) was created by an alliance of several banks and regional bankcard associations in response to the BankAmericard issued by Bank of America, which later became Visa and is still its biggest competitor. Along with Visa, Mastercard has faced numerous antitrust lawsuits. Prior to its initial public offering, Mastercard Worldwide was a cooperative owned by the more than 25,000 financial institutions that issued its branded cards.

History

Although BankAmericard's debut in September 1958 was a failure, it began to turn a profit by May 1961. Bank of America deliberately kept this information secret and allowed then-widespread negative impressions to linger in order to ward off competition. This strategy was successful until 1966, when BankAmericard's profitability had become far too big to hide. From 1960 to 1966, only 10 new credit cards were introduced in the United States, but from 1966 to 1968, approximately 440 credit cards were introduced by banks large and small throughout the country. These newcomers promptly banded together into regional bankcard associations. One reason why most banks chose to join forces was that at the time, 16 states limited the ability of banks to operate through branch locations, while 15 states entirely prohibited branch banking and required unit banking. A unit bank can legally operate only at a single site and is thereby forced to remain very small. By joining a regional bankcard association, a unit bank could quickly add a credit card to its lineup of financial products and achieve economies of scale by outsourcing tedious back-office tasks like card servicing to the association. Such associations also enabled unit banks to aggregate their customer bases and merchant networks in order to make a credit card useful for both customers and merchants; early credit cards had failed because they could only be used within a small radius around their respective issuing banks.

In 1966, Karl H. Hinke, an executive vice president at Marine Midland Bank, asked representatives of several other banks to meet him in Buffalo, New York. Marine Midland had just launched its own regional bankcard in the Upstate New York market after Bank of America declined its request for a BankAmericard regional license because Marine Midland was too big. The result of the Buffalo meeting was that several banks and regional bankcard associations soon agreed to join forces as Interbankard, Inc., which then became the Interbank Card Association (ICA). By the end of 1967, ICA had 150 members and Hinke became ICA's chairman. Bank of America eventually joined Mastercard as well. (In the 21st century, Bank of America would revive the BankAmericard brand name as a Mastercard credit card, which it remains today.)
The Interbank branding in 1966 initially consisted only of a small, unobtrusive lowercase i inside a circle in the lower right-hand corner of the front of each Interbank card; the rest of the card design was the prerogative of each issuing bank. This tiny logo proved to be entirely unsatisfactory for creating nationwide brand awareness to compete against the established leader, BankAmericard. In 1969, Interbank developed a new national brand, "Master Charge: The Interbank Card", by combining the two overlapping yellow and orange circles of the Western States Bankcard Association with the "Master Charge" name coined by the First National Bank of Louisville, Kentucky. That same year, First National City Bank joined Interbank and merged its proprietary Everything Card program with Master Charge. In 1968, the ICA and Eurocard started a strategic alliance, which effectively allowed the ICA access to the European market and allowed Eurocard to be accepted on the ICA network. The Access card system from the United Kingdom joined the ICA/Eurocard alliance in 1972.

In 1979, Master Charge: The Interbank Card was renamed MasterCard. Beginning in 1980, the company rolled out new cards with a refreshed logo. Cards retained the overlapping circles first adopted in 1969; subsequent card designs have continued to use this motif. In 1983, Mastercard International Inc. became the first payment card company to use holograms as part of its card security. It acquired the Cirrus network of automated teller machines in 1985. In 1997, Mastercard took over the Access card, and the Access brand was then retired. In 2002, MasterCard International merged with Europay International, another large credit-card issuer association, of which Eurocard had become a part in 1992. Mastercard became a Delaware corporation in connection with the merger, as well as in anticipation of an IPO. The company, which had been organized as a cooperative of banks, had an initial public offering on May 25, 2006, selling 95.5 million shares at $39 each. The deal was designed to maintain the value of the brand and minimize regulatory costs. The stock is traded on the NYSE under the symbol MA, with a market capitalization of $434 billion as of April 2024.

In August 2010, Mastercard Worldwide, as it had been rebranded, expanded its e-commerce offering with the acquisition of DataCash, a UK-based payment processing and fraud/risk management provider. In March 2012, Mastercard announced the expansion of its mobile contactless payments program, including markets across the Middle East. In spring 2014, Mastercard acquired Australia's leading rewards program manager company Pinpoint for an undisclosed amount. In July 2016, Mastercard acquired 92.4% of VocaLink, a British company, for $920 million. In August 2017, Mastercard acquired Brighterion, a company with a portfolio of intellectual property in the areas of artificial intelligence and machine learning; Brighterion holds several patents. In August 2019, Mastercard launched an offer to acquire part of the operations of Nets, a Scandinavian company, for €2.85 billion. In April 2021, Mastercard created a calculator that gathers information and measures the carbon footprints of its customers, in order to help them understand how much they are contributing to carbon emissions and global warming. The same month, Mastercard announced the acquisition of Ekata, a company specializing in identity verification, for $850 million.
In March 2022, following the Russian invasion of Ukraine, Mastercard announced that it would suspend all business operations in Russia. On November 17, 2023, the Chinese government approved the local bank card clearing license for the joint venture established by Mastercard in China. As of May 9, 2024, the joint venture can issue Mastercard bank cards that use the Chinese yuan for payment. In September 2024, Mastercard acquired cybersecurity company Recorded Future for $2.65 billion.

In February 2025, South Korean manufacturer KONA I received a Letter of Approval from Mastercard to produce both biometric plastic (PVC) and metal cards. These cards embed a fingerprint sensor within the card's secure chip, enabling cardholder authentication directly on the card for in-store EMV PIN-free transactions, while ensuring biometric data never leaves the card. In mid-2025, Mastercard formalized the rollout of biometric-enabled metal credit cards built on the IDEX Pay platform. In July 2025, Eastern Bank PLC (EBL) in Bangladesh, in partnership with Mastercard, unveiled the world's first commercially issued biometric metal credit card under the "World Elite" tier. The launch took place on 5 July 2025 in Dhaka, co-sponsored by IDEX Biometrics, KONA I, and Infineon Technologies. The card allows fingerprint authentication for EMV transactions, supports contact and contactless payments, and is issued as part of Mastercard's Priceless Specials program. Mastercard's global FAQ indicates biometric payment cards have been issued in over 70 markets, integrate with standard EMV terminals and ATMs, require no new infrastructure, and support user self-enrollment kits. In January 2026, Mastercard announced an agentic AI suite, the Mastercard Agent Suite, to help banks, retailers and other enterprises build, test, and deploy autonomous AI-driven workflows and agents for business operations; it is expected to be available in the second quarter of 2026.

As of 2024, Mastercard ranked 164 on the Fortune 500 list of the largest United States corporations by revenue.

Market power

Operating a payment processing network entails a risk of engaging in anticompetitive practices because of the many parties involved (that is, the customer and their bank, and the merchant and their bank). Few companies have faced more antitrust lawsuits, both in the US and abroad. Mastercard, along with Visa, engaged in systematic parallel exclusion against American Express during the 1980s and 1990s. Mastercard used exclusivity clauses in its contracts and blacklists to prevent banks from doing business with American Express. Such exclusionary clauses and other written evidence were used by the United States Department of Justice in regulatory actions against Mastercard and Visa. Discover has sued Mastercard for similar issues.

Both Mastercard and Visa have paid approximately $3 billion in damages resulting from a class-action lawsuit filed in January 1996 for debit card swipe fee price fixing. The litigation cites several retail giants as plaintiffs, including Wal-Mart, Sears, Roebuck & Co., and Safeway. In 1996, four million merchants sued Mastercard in federal court for making them accept debit cards if they wanted to accept credit cards and for dramatically increasing credit card swipe fees. This case was settled with a multibillion-dollar payment in 2003; this was the largest antitrust award in history.
In 1998, the Department of Justice sued Mastercard over rules prohibiting its issuing banks from doing business with American Express or Discover. The Department of Justice won in 2001, and the verdict withstood appeal. American Express also filed suit. On August 23, 2001, Mastercard International Inc. was sued for violating the Florida Deceptive and Unfair Trade Practices Act. On November 15, 2004, American Express sued Mastercard over anticompetitive practices that had prevented American Express from issuing cards through U.S. banks; Mastercard ultimately paid $1.8 billion in settlement.

On November 27, 2012, a federal judge entered an order granting preliminary approval to a proposed settlement of a class-action lawsuit filed in 2005 by merchants and trade associations against Mastercard and Visa over alleged price-fixing practices. About one-fourth of the named class plaintiffs decided to opt out of the settlement. Opponents object to provisions that would bar future lawsuits and prevent merchants from opting out of significant portions of the proposed settlement. The plaintiffs allege that Visa Inc. and Mastercard fixed interchange fees, also known as swipe fees, that are charged to merchants for the privilege of accepting payment cards. In their complaint, the plaintiffs also alleged that the defendants unfairly prevent merchants from encouraging customers to use less expensive forms of payment such as lower-cost cards, cash, and checks. A settlement of $6.24 billion received preliminary approval in 2019; after reductions for merchants who opted out, a settlement of about $5.54 billion received final approval later that year. Certain merchants appealed the settlement and were heard; the case is ongoing as of October 2022.

In October 2010, Mastercard and Visa reached a settlement with the U.S. Justice Department in another antitrust case. The companies agreed to allow merchants displaying their logos to decline certain types of cards (because interchange fees differ), or to offer consumers discounts for using cheaper cards.

Mastercard, along with Visa, has been sued in a class action by ATM operators who claim the credit card networks' rules effectively fix ATM access fees. The suit claims that this is a restraint of trade in violation of federal law. The lawsuit was filed by the National ATM Council and independent operators of automated teller machines. More specifically, it is alleged that Mastercard's and Visa's network rules prohibit ATM operators from offering lower prices for transactions over PIN-debit networks that are not affiliated with Visa or Mastercard. The suit says that this price fixing artificially raises the price that consumers pay using ATMs, limits the revenue that ATM operators earn, and violates the Sherman Act's prohibition against unreasonable restraints of trade. Johnathan Rubin, an attorney for the plaintiffs, said, "Visa and Mastercard are the ringleaders, organizers, and enforcers of a conspiracy among U.S. banks to fix the price of ATM access fees in order to keep the competition at bay." In October 2025, merchants agreed to a $231.7 million settlement before a U.S. District Judge in B & R Supermarket, Inc., et al. v. Visa, Inc., et al., over costs imposed in frauds related to counterfeit, lost, or stolen cards, with Mastercard paying $79.8 million of the settlement.
In 2003, the Reserve Bank of Australia required that interchange fees be dramatically reduced, from about 0.95% of the transaction to approximately 0.5%. One notable result has been the reduced use of reward cards and the increased use of debit cards. Australia also prohibited the no-surcharge rule, a policy established by credit card networks like Visa and Mastercard to prevent merchants from charging a credit card usage fee to the cardholder. A surcharge would mitigate or even exceed the merchant discount paid by a merchant, but would also make the cardholder more reluctant to use the card as the method of payment. Australia has also made changes to the interchange rates on debit cards and has considered abolishing interchange fees altogether. As of November 2006, New Zealand was considering similar actions, following a Commerce Commission lawsuit alleging price fixing by Visa and Mastercard. In New Zealand, merchants pay a 1.8% fee on every credit card transaction.

The European Union has repeatedly criticized Mastercard for monopolistic trade practices. In April 2009, Mastercard reached a settlement with the European Union in an antitrust case, promising to reduce debit card swipe fees to 0.2 percent of purchases. In December 2010, a senior official from the European Central Bank called for a break-up of the Visa/Mastercard duopoly through the creation of a new European debit card for use in the Single Euro Payments Area (SEPA). WikiLeaks published documents showing that American authorities lobbied Russia to defend the interests of Visa and Mastercard. In response, Mastercard blocked payments to WikiLeaks. Members of the European Parliament expressed concern that payments from European citizens to a European corporation could apparently be blocked by the United States, and called for a further reduction in the dominance of Visa and Mastercard in the European payment system.

In 2013, Mastercard was under investigation by the European Union for the high fees it charged merchants to accept cards issued outside the EU, compared to cards issued in the EU, as well as other anti-competitive practices that could hinder electronic commerce and international trade, and high fees associated with premium credit cards. The EU's competition regulator said that these fees were of special concern because of the growing role of non-cash payments. Mastercard had already been banned from charging fees on cross-border transactions conducted wholly within the EU by a ruling of the European Commission in 2007. The European Commission said that its investigation also included large differences in fees across national borders. For instance, a €50 payment might cost €0.10 in the Netherlands but eight times that amount in Poland. The Commission argues that Mastercard rules that prohibit merchants from enjoying better terms offered in other EU countries may be against antitrust law. The European Consumer Organisation (BEUC) praised the action against Mastercard. BEUC said interbank fees push up prices and hurt consumers. BEUC Director General Monique Goyens said, "So in the end, all consumers are hit by a scheme which ultimately rewards the card company and issuing bank." In January 2019, the European Commission imposed an antitrust fine of €570,566,000 on Mastercard for "obstructing merchants' access to cross-border card payment services", due to Mastercard's rules obliging acquiring banks to apply the interchange fees of the country where a retailer was located.
The Commission concluded that Mastercard's rules prevented retailers from benefitting from lower fees and restricted competition between banks across borders, in breach of EU antitrust rules. The infringement ended when Mastercard amended its rules upon the entry into force of the Interchange Fee Regulation in 2015, which introduced caps on interchange fees. The Commission did, however, grant Mastercard a 10% reduction of the fine in return for Mastercard acknowledging the facts and cooperating with the antitrust investigation. In February 2021, following an investigation by the British Payment Systems Regulator, Mastercard admitted liability for breaching competition rules in relation to pre-paid cards. In November 2024, the European Commission launched a further investigation into whether the scheme fees imposed by Visa and Mastercard negatively impact retailers. Some retailers had complained about the fees in recent years, citing a lack of transparency. The Commission took its investigation further in June 2025, asking for a retailer view and for comments from the card operators about whether "a standardized summary of fees" would help to promote transparency. In November 2025, Mastercard announced its expansion in Europe with the creation of new hubs in Warsaw and Gdańsk, Poland.

Other issues

Mastercard, Visa, and other credit cards have been used to fund accounts since online gambling began in the mid-1990s. On March 20, 2000, the United States District Court for the Eastern District of Louisiana reviewed motions in In re MasterCard International Inc., multi-district litigation alleging that Mastercard illegally interacted with a number of internet casinos. The plaintiffs alleged, among other claims, that Mastercard had violated the Federal Wire Act. They sought financial relief for losses suffered at online gambling sites outside the United States. The District Court's ruling on February 23, 2001, later upheld by the United States Court of Appeals for the Fifth Circuit, sided with Mastercard. The Fifth Circuit also clarified the application of the Wire Act to illegal online gambling: the court determined that the Wire Act applied only to gambling activities related to a "sporting event or contest", and therefore could not conclude that Mastercard had violated it.

When PASPA was overturned on May 14, 2018, Mastercard had to provide new guidance to its member banks. It clarified that state location restrictions apply to the individual placing the wager, not to the member bank processing the transaction. According to various state gaming laws, sports betting providers must use Internet geolocation to determine a customer's physical location prior to accepting a wager. The Independent Community Bankers of America specifically requested information about a new online gambling merchant category code. Mastercard has dedicated MCC 7801 to online gambling; this code is distinct from 7800 for government-owned lotteries and 7802 for government-licensed horse and dog tracks.

In December 2010, Mastercard blocked all payments to the whistleblowing platform WikiLeaks due to claims that it engages in illegal activity. In response, the online activist group Anonymous organized a denial-of-service attack; as a result, the Mastercard website experienced downtime on December 8–9, 2010. On December 9, 2010, Mastercard's servers came under a massive attack as part of Operation Avenge Assange, in retaliation for closing down payments to WikiLeaks.
The security of thousands of credit cards was compromised during that attack due to a phishing site set up by the attackers. However, Mastercard denied this, stating that account data had "not been placed at risk". WikiLeaks' spokesman said, "We neither condemn nor applaud these attacks." U.N. High Commissioner for Human Rights Navi Pillay said that closing down credit lines for donations to WikiLeaks "could be interpreted as an attempt to censor the publication of information, thus potentially violating WikiLeaks' right to freedom of expression". In July 2011, Iceland-based IT firm DataCell, the company that enabled WikiLeaks to accept credit and debit card donations, said it would take legal action against Visa Europe and Mastercard, and that it would move immediately to try to force the two companies to resume allowing payments to the website. Earlier, on December 8, 2010, DataCell's CEO Andreas Fink had stated that the "suspension of payments towards WikiLeaks is a violation of the agreements with their customers." On July 14, 2011, DataCell announced it had filed a complaint with the European Commission, claiming that the closure by Visa and Mastercard of DataCell's access to the payment card networks violated the competition rules of the European Community. On July 12, 2012, a Reykjavík court ruled that Valitor, Visa and Mastercard's partner in Iceland, had to reopen the payment gateway and start processing donations within fourteen days or pay daily fines of ISK 800,000 (about $6,000) for each day after that time. Valitor also had to pay DataCell's litigation costs of ISK 1,500,000.

In 2014, pursuant to an agreement between Mastercard and the Nigerian Government, acting through the National Identity Management Commission, the new Nigerian ID cards bear the Mastercard logo, contain personal database data, and double as payment cards, irrevocably linking such payments to the individuals. This sparked criticism by the Civil Rights Congress, which alleged that it "represents a stamped ownership of a Nigerian by an American company ... reminiscent of the logo pasted on the bodies of African slaves transported across the Atlantic." In 2018, Bloomberg News reported that Google had paid millions of dollars to Mastercard for its users' credit card data for advertising purposes. The deal had not been publicly announced.

On July 14, 2021, the Reserve Bank of India (RBI) indefinitely barred Mastercard from issuing new debit or credit cards to domestic Indian customers starting July 22, 2021, for violating data localization and storage rules set by RBI on April 6, 2018, under the Payment and Settlement Systems Act, 2007 (PSS Act). The ban did not affect cards already issued and working in India. Mastercard was the third major payment systems provider to be restricted in India, after American Express and Diners Club International. On June 16, 2022, the business restrictions were lifted by RBI with immediate effect. In 2018, the State Bank of Vietnam (SBV) requested that banks temporarily halt the issuance of new Mastercard cards due to violations of international payment regulations; specifically, Mastercard did not comply with the regulation requiring payments to be conducted through the National Payment Corporation of Vietnam (NAPAS). In 2020, the ban was lifted after Mastercard committed to complying with Vietnamese regulations.
Despite its widespread acceptance, using Mastercard in Vietnam still comes with certain limitations: some Vietnamese websites and applications do not accept Mastercard as a payment method; not all ATMs in Vietnam allow cash withdrawals using Mastercard; and some Mastercard users in Vietnam have reported inadequate customer service.

In December 2020, Mastercard barred the use of its credit cards on Pornhub, an online pornography site. In April 2023, The Hill reported on an update to Mastercard's policy for adult content, in effect since October 2021, that requires sellers to have age and identity verification and content review in place prior to posting. On August 30, 2023, the American Civil Liberties Union, together with a coalition of other organizations, filed a complaint with the Federal Trade Commission requesting an investigation into the policy as an unfair business practice under Section 5 of the FTC Act. Mastercard faced further backlash in 2025 for pressuring video game digital distribution sites such as Steam and Itch.io into cracking down on adult content following pressure from the activist group Collective Shout. Mastercard initially denied that it was doing so, but Valve, the developer of Steam, said that the company "specifically cited" Mastercard rule 5.12.7, which prohibits "any Transaction that is illegal, or in the sole discretion of the Corporation, may damage the goodwill of the Corporation or reflect negatively on the Marks," including that which is "patently offensive and lacks serious artistic value... or any other material that the Corporation deems unacceptable to sell in connection with a Mark".

Products

Depending on the geographical location, Mastercard issuers can issue credit and debit cards in tiers, from the lowest to the highest. Through a partnership with an Internet company that specializes in personalized shopping, Mastercard introduced a Web shopping mall on April 16, 2010, that it said could pinpoint with considerable accuracy what its cardholders are likely to purchase. In September 2014, Mastercard worked with Apple to incorporate a new mobile wallet feature, known as Apple Pay, into Apple's new iPhone and Apple Watch models, enabling users to more readily use their Mastercard and other credit cards. In May 2020, Mastercard announced the Mastercard Track Business Payment Service, which provides business-to-business payments between buyers and suppliers. According to the head of global commercial products, it "creates a directory of suppliers, enabling suppliers to publish their payment rules so they can better control how they receive payments while making it easier for buyers to find suppliers and understand their requirements".

On February 10, 2021, Mastercard announced its support of cryptocurrencies, saying that later in 2021 it would start supporting select cryptocurrencies directly on its network. One of the main areas Mastercard wants to support is using digital assets for payments; in its view, crypto assets will need to offer the stability people need in a vehicle for spending, not investment. In October 2021, Mastercard announced that, through its partnership with Bakkt, any bank or merchant on its network would soon be able to offer crypto services. In June 2022, Mastercard announced that it would allow cardholders to purchase NFTs via various NFT scaling platforms. Mastercard, Comerica Bank, and the U.S.
Treasury Department teamed up in 2008 to create the Direct Express Debit Mastercard. The federal government uses the Direct Express product to issue electronic payments to people who do not have bank accounts; Comerica Bank is the issuing bank for the debit card. The Direct Express cards give recipients a number of consumer protections.

In June 2013, Mastercard announced a partnership with British Airways to offer members the Executive Club Multi-currency Cash Passport, which allows members to earn extra points and make multi-currency payments. The Passport card allows users to load up to ten currencies (euro, pound, U.S. dollar, Turkish lira, Swiss franc, Australian dollar, Canadian dollar, New Zealand dollar, U.A.E. dirham, and South African rand) at a locked-in rate. When used, the card selects the local currency to ensure the best exchange rate, and if the local currency is not already loaded onto the card, funds are drawn from the other currencies.

QkR is a mobile payment app developed by Mastercard, operating in the US and Australia, for ordering products and services through a smartphone with payments charged to the associated credit card. It is deployed at large-scale events, such as sporting events, concerts, movie theaters, and schools. Unlike other Mastercard mobile payment apps such as PayPass, QkR does not use NFC from the phone, but rather an Internet connection. Users can open the app, scan a QR code located on the back of the seat in front of them, and place orders for refreshments of their choice; the order is dispatched to a nearby concession stand. QkR is being marketed to vendors as a replacement for other mobile payment and mobile ordering apps, whether distributed by the vendor (such as the Starbucks, McDonald's, or Chipotle mobile ordering apps) or by a third party, such as Square, headed by Twitter cofounder Jack Dorsey.

Mastercard Contactless (formerly PayPass) is an EMV-compatible contactless payment feature similar to American Express's ExpressPay and Visa's payWave; all three use the same contactless symbol. It is based on the ISO/IEC 14443 standard and provides cardholders with a simpler way to pay by tapping a payment card or other payment device, such as a phone or key fob, on a point-of-sale terminal reader rather than swiping or inserting a card. Contactless can currently be used on transactions up to and including 100 GBP, 50 EUR, 60 BAM, 80 CHF, 50 USD, 100 CAD, 200 SEK, 500 NOK, 100 PLN, 350 DKK, 80 NZD, 100 AUD, 1000 RUB, 500 UAH, 500 TRY, or 5000 INR, with the limit depending on the card's currency rather than the transaction currency.

In 2003, Mastercard concluded a nine-month PayPass market trial in Orlando, Florida, with JPMorgan Chase, Citibank, and MBNA; more than 16,000 cardholders and more than 60 retailer locations participated. In addition, Mastercard worked with Nokia and the Nokia 6131, AT&T Wireless, and JPMorgan Chase to incorporate Mastercard PayPass into mobile phones using near-field communication technology in Dallas, Texas. In 2011, Google and Mastercard launched Google Wallet, an Android application which allows a mobile device to send credit/debit card information directly to a PayPass-enabled payment terminal, bypassing the need for a physical card; Google Wallet was eventually succeeded by Google Pay. In 2014, Apple released Apple Pay for iOS devices.
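To make the limit mechanics concrete, here is a minimal sketch in Python. It is illustrative only: the values are the per-currency caps quoted above, while the table and function names are hypothetical and not part of any Mastercard API. The key point is that the cap is selected by the card's currency, not the transaction currency.

```python
# Illustrative sketch of the per-currency contactless caps quoted above.
# CONTACTLESS_LIMITS and contactless_allowed are hypothetical names,
# not part of any Mastercard API; limits are "up to and including".
CONTACTLESS_LIMITS = {
    "GBP": 100, "EUR": 50, "BAM": 60, "CHF": 80, "USD": 50,
    "CAD": 100, "SEK": 200, "NOK": 500, "PLN": 100, "DKK": 350,
    "NZD": 80, "AUD": 100, "RUB": 1000, "UAH": 500, "TRY": 500,
    "INR": 5000,
}

def contactless_allowed(card_currency: str, amount: float) -> bool:
    """True if `amount`, expressed in the card's own currency, is within
    the cap; the transaction currency plays no role in the lookup."""
    limit = CONTACTLESS_LIMITS.get(card_currency)
    return limit is not None and amount <= limit

print(contactless_allowed("GBP", 100))  # True: the cap is inclusive
print(contactless_allowed("EUR", 60))   # False: above the 50 EUR cap
```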
During late 2015, Citicards in the US stopped issuing PayPass-enabled plastic, but the key fob was still available upon request. Effective July 16, 2016, Citicards stopped supporting PayPass completely. While existing cards and key fobs continued to work until their expiration date, no new PayPass-enabled hardware was issued to US customers after that date.

In April 2023, Mastercard announced its intention to expand its partnerships with cryptocurrency firms. At the time of the announcement, the firm had already partnered with other financial companies to offer cards linked to crypto in some nations. This was despite an increasingly intense regulatory environment, and it followed rival company Visa ending its agreement with FTX in November 2022. The company said its Mastercard Crypto Credential service would allow for transactions between countries that met requirements such as the so-called "travel rule" of the Financial Action Task Force (FATF), using technology from CipherTrace. It also worked with wallet providers Bit2Me, Lirium, Mercado Bitcoin, and Uphold. Its head of crypto and blockchain, Raj Dhamodharan, said support for NFT transactions would come later.

Branding

Antitrust litigation over the years has damaged the brand. Mastercard's current advertising campaign tagline, Priceless, was introduced in 1997. The slogan associated with the campaign is "There are some things money can't buy. For everything else, there's Mastercard." In more recent iterations, the Priceless campaign has applied to both Mastercard's credit card and debit card products. The company also uses the Priceless description to promote products such as its Priceless travel site, which features deals and offers for Mastercard holders, and Priceless Cities, offers for people in specified locations.

In mid-2006, MasterCard International changed its name to MasterCard Worldwide to suggest a more global scale. In addition, the company introduced a new corporate logo adding a third circle to the two that had been used in the past (the familiar card logo, resembling a Venn diagram, remained unchanged), along with a new corporate tagline, The Heart of Commerce. In July 2016, Mastercard introduced a new rebranding and corporate logo, and changed its service name from "MasterCard" to "mastercard". In January 2019, Mastercard removed its name from its logo, leaving just the overlapping discs. In 2021, Mastercard was ranked number 13 on Morning Consult's list of most trusted brands.

Mastercard sponsors major sporting events and teams throughout the world. These include the New Zealand national rugby team, MLB, the UEFA Champions League, and the PGA Tour's Arnold Palmer Invitational. Previously, it also sponsored the FIFA World Cup, but withdrew its contract after a court settlement, and its rival, Visa, took up the contract in 2007. It also partners with the Brazil national football team and the Copa Libertadores. In 1997, Mastercard was the main sponsor of the Mastercard Lola Formula One team, which withdrew from the 1997 Formula One season due to financial problems after failing to qualify for its first race. The company also sponsored Jordan Grand Prix from the 1998 season through the end of the 2001 Formula One season. In July 2024, Mastercard returned to Formula One after signing a multi-year sponsorship deal with McLaren Racing. In August 2025, McLaren Racing announced Mastercard as the team's official naming partner.
The team will enter the 2026 season onwards as the McLaren Mastercard Formula 1 Team. Mastercard was also the title sponsor of the Alamo Bowl game from 2002 until 2005. In late 2018, Mastercard became the first major sponsor of League of Legends esports; the company sponsors the League of Legends World Championship, the Mid-Season Invitational, and the All-Star event. Until 2018, Mastercard was the sponsor of the Memorial Cup, the CHL's annual championship between its three leagues. In September 2022, Mastercard acquired the title sponsorship rights for all international and domestic home matches organized by the Board of Control for Cricket in India (BCCI).

Corporate affairs

Mastercard has its headquarters in the Mastercard International Global Headquarters in Purchase, New York. The Global Operations Center is located in O'Fallon, Missouri, a suburb of St. Louis. Mastercard was listed as one of the best companies to work for in 2013 by Forbes. In 2016, Mastercard UK became one of 144 companies that signed HM Treasury's Women in Finance Charter, a pledge for balanced gender representation in the company. Prior to its IPO in 2006, Mastercard was an association with a board of directors composed of banks.

World Beyond Cash

In 2017, CEO Ajay Banga reinforced the company's goal of extending financial services to those outside the current system by bringing digital payment systems to the unbanked around the world. The company invested $500M in India, with offices in Pune and Vadodara, to help Mastercard bring cashless transactions to the world's largest population. The company is also scheduled to invest an additional $750M in cashless apps and technology between 2017 and 2020, with a particular focus on India.

Banknet

Mastercard operates Banknet, a global telecommunications network linking all Mastercard card issuers, acquirers, and data processing centers into a single financial network. The operations hub is located in St. Louis, Missouri. Banknet uses the ISO 8583 protocol. Mastercard's network differs significantly from Visa's. Visa's is a star-based system in which all endpoints terminate at one of several main data centers, where all transactions are processed centrally. Mastercard's network is an edge-based, peer-to-peer network in which transactions travel a meshed network directly to other endpoints, without the need to travel to a single point. This makes Mastercard's network much more resilient, in that a single failure cannot isolate a large number of endpoints.
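The resilience contrast can be illustrated with a toy simulation. The sketch below (assuming the networkx library; both topologies are hypothetical stand-ins, not actual Banknet or VisaNet layouts) removes a single node from a star network and from a mesh and counts how many endpoints are cut off from the rest.

```python
# Toy comparison of star vs. mesh resilience (hypothetical topologies,
# not real Banknet/VisaNet data). Requires the networkx library.
import networkx as nx

N = 50  # number of endpoints

# Star: every endpoint terminates at a single central node (node 0).
star = nx.star_graph(N)

# Mesh: each node keeps several peer links (a random 4-regular graph).
mesh = nx.random_regular_graph(d=4, n=N + 1, seed=42)

def cut_off_after_failure(g, failed_node):
    """Count nodes separated from the largest surviving component
    after one node fails."""
    h = g.copy()
    h.remove_node(failed_node)
    largest = max(nx.connected_components(h), key=len)
    return h.number_of_nodes() - len(largest)

print("star, hub fails:", cut_off_after_failure(star, 0))   # 49 of 50
print("mesh, peer fails:", cut_off_after_failure(mesh, 0))  # typically 0
```

In the star, the hub is a single point of failure; in the mesh, traffic can route around any one lost peer, which is the property the paragraph above attributes to an edge-based network.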
========================================
Source: https://en.wikipedia.org/wiki/Social_network#cite_note-67
Social network

A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities, along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics. For instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.

Overview

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.

History

In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or as impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.

Major developments in the field occurred in the 1930s, driven by several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell, and Elizabeth Bott Spillius, is often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India, and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.

By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to emerging data about online social networks, as well as "digital traces" regarding face-to-face networks.

Levels of analysis

In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, so the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.

At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.

Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.

Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs.

Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.

Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.

In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.

Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises, or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.

Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects, as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.

Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory, a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network.

Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population.

Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in the social and behavioral sciences and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping).

Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems, and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks, these features also include reciprocity, triad significance profile (TSP; see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
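As a concrete illustration of the scale-free properties described above, the following sketch (assuming the networkx library; the network size is an arbitrary choice) grows a Barabási–Albert preferential-attachment network and shows how a few hubs acquire degrees far above the average while the degree counts fall off in a heavy tail.

```python
# Sketch of a scale-free degree distribution via preferential attachment
# (Barabási–Albert model). Requires networkx; the size is arbitrary.
from collections import Counter
import networkx as nx

g = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)

degrees = [d for _, d in g.degree()]
print("mean degree:", sum(degrees) / len(degrees))  # about 4 when m=2
print("max degree:", max(degrees))  # a hub, far above the mean

# Heavy tail: node counts fall off roughly polynomially with degree,
# rather than exponentially as in a comparable purely random graph.
hist = Counter(degrees)
for k in (2, 4, 8, 16, 32, 64):
    print(f"nodes of degree {k}: {hist.get(k, 0)}")
```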
Theoretical links

Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. Few complete theories have been produced from social network analysis; two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, because cliques have a tendency to have more homogeneous opinions and to share many common traits. This homophilic tendency was the reason the members of the cliques were attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".

Structural holes

In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities, and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are to some degree additive, rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries or sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.
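Burt's structural-hole measures can be computed directly. The toy example below (assuming the networkx library, which provides effective_size() and constraint(); the graph itself is invented) places a broker between two dense cliques: the broker's contacts are non-redundant, so its effective size is high and its constraint low compared with a node inside a clique.

```python
# Toy structural-holes example: a broker bridging two cliques.
# Requires networkx, which implements effective_size() and constraint().
import networkx as nx

g = nx.Graph()
g.add_edges_from(nx.complete_graph(range(0, 4)).edges())  # clique A
g.add_edges_from(nx.complete_graph(range(4, 8)).edges())  # clique B
g.add_edges_from([("broker", 0), ("broker", 4)])          # the bridge

eff, con = nx.effective_size(g), nx.constraint(g)
print(f"broker: effective size {eff['broker']:.2f}, constraint {con['broker']:.2f}")
print(f"node 1: effective size {eff[1]:.2f}, constraint {con[1]:.2f}")
# The broker's two contacts do not know each other, so none of its ties
# are redundant; node 1's contacts are all mutually connected, so most
# of its ties duplicate information it already has.
```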
Research clusters

Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career.

In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies today also make extensive use of such methods.

Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks.

The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats.

In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.

Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products.
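A minimal simulation can make the diffusion mechanism concrete. The sketch below (assuming the networkx library; the independent cascade model and all parameters are illustrative choices, not drawn from the studies cited above) seeds a few early adopters in a small-world network and lets each new adopter try once to convert each neighbor.

```python
# Toy independent-cascade diffusion: parameters are illustrative, not
# taken from any study cited above. Requires the networkx library.
import random
import networkx as nx

def independent_cascade(g, seeds, p, rng):
    """Each new adopter gets one chance to convert each neighbor with
    probability p; returns the final set of adopters."""
    adopted, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in g.neighbors(u):
                if v not in adopted and rng.random() < p:
                    adopted.add(v)
                    new.append(v)
        frontier = new
    return adopted

g = nx.watts_strogatz_graph(n=1000, k=6, p=0.05, seed=7)  # small world
adopters = independent_cascade(g, seeds=[0, 1, 2], p=0.15,
                               rng=random.Random(7))
print("final adopters:", len(adopters))  # depends on p and the topology
```

Varying p and the seed set shows the structural point made above: the same innovation can die out or saturate the network depending on where the early adopters sit and how well connected they are.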
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Researchers also study formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational, and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations, and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from one's social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to occur between nodes that are similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
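As a concrete illustration of measuring homophily, the attribute assortativity coefficient compares the observed fraction of same-group ties with what random mixing would produce; values near +1 indicate strong homophily or segregation, while values near 0 indicate group-independent mixing. A minimal sketch in Python with networkx follows; the six-node graph and its group labels are invented for illustration.

import networkx as nx

# Two groups of nodes with mostly within-group ties and one cross-group tie.
G = nx.Graph()
G.add_nodes_from(["a1", "a2", "a3"], group="north")
G.add_nodes_from(["b1", "b2", "b3"], group="south")
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"),   # within-group ties
    ("b1", "b2"), ("b2", "b3"),   # within-group ties
    ("a3", "b1"),                 # the single cross-group tie
])

# Positive values indicate that ties disproportionately join same-group nodes.
print(nx.attribute_assortativity_coefficient(G, "group"))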
========================================
[SOURCE: https://en.wikipedia.org/wiki/Carpetbaggers] | [TOKENS: 6014]
Contents Carpetbagger In the history of the United States, carpetbagger is a largely historical pejorative used by Southerners to describe allegedly opportunistic or disruptive Northerners who came to the Southern states after the American Civil War and were perceived to be exploiting the local populace for their own financial, political, or social gain. The term broadly included both individuals who sought to promote Republican politics (including the right of African Americans to vote and hold office) and individuals who saw business and political opportunities because of the chaotic state of the local economies following the war. In practice, the term carpetbagger often was applied to any Northerners who were present in the South during the Reconstruction Era (1865–1877). The word is closely associated with scalawag, a similarly pejorative word used to describe white Southerners who supported the Republican Party-led Reconstruction. White Southerners commonly denounced carpetbaggers collectively during the post-war years, fearing they would loot and plunder the defeated South and be allied politically with the Radical Republicans. Sixty men from the North, including educated free blacks and slaves who had escaped to the North and returned South after the war, were elected from the South as Republicans to Congress. The majority of Republican governors in the South during Reconstruction were from the North. Since the end of the Reconstruction era, the term has been used to denote people who move into a new area for purely economic or political reasons despite not having ties to that place. Etymology and definition The term carpetbagger, used exclusively as a pejorative term, originated from the carpet bag, a form of cheap luggage, made from carpet fabric, which many of the newcomers carried. The term came to be associated with opportunism and exploitation by outsiders. It is now used in the United States to refer to a parachute candidate, that is, an outsider who runs for public office in an area without having lived there for more than a short time, or without having other significant community ties. According to a 1912 book by Oliver Perry Temple, Tennessee Secretary of State and Radical Republican Andrew J. Fletcher "was one of the first, if not the very first, in the State to denounce the hordes of greedy office-seekers who came from the North in the rear of the army in the closing days of the [U.S. Civil] War", in the June 1867 stump speech that he delivered across Tennessee in support of the re-election of the disabled Tennessee Governor William G. Brownlow: No one more gladly welcomes the Northern man who comes in all sincerity to make a home here, and to become one of our people, than I, but for the adventurer and the office-seeker who comes among us with one dirty shirt and a pair of dirty socks, in an old rusty carpet bag, and before his washing is done becomes a candidate for office, I have no welcome. That was the origin of the term "carpet bag", and out of it grew the well known term "carpet-bag government". In the United Kingdom at the end of the 20th century, carpetbagger developed another meaning, referring to people who joined a mutual organization, such as a building society, in order to force it to demutualize, that is, to convert into a joint stock company, seeking personal financial gain by that means. Background The Republican Party in the South comprised three groups after the Civil War, and white Democratic Southerners referred to two in derogatory terms.
Scalawags were white Southerners who supported the Republican party, "carpetbaggers" were recent arrivals in the region from the North, and freedmen were freed slaves. Most of the 430 Republican newspapers in the South were edited by scalawags and 20 percent were edited by carpetbaggers. White businessmen generally boycotted Republican papers, which survived through government patronage. Historian Eric Foner argues: ...most carpetbaggers probably combine the desire for personal gain with a commitment to taking part in an effort "to substitute the civilization of freedom for that of slavery"...Carpetbaggers generally supported measures aimed at democratizing and modernizing the South – civil rights legislation, aid to economic development, the establishment of public school systems. Beginning in 1862, Northern abolitionists moved to areas in the South that had fallen under Union control. Schoolteachers and religious missionaries went to the South to teach the freedmen; some were sponsored by northern churches. Some were abolitionists who sought to continue the struggle for racial equality; they often became agents of the federal Freedmen's Bureau, which started operations in 1865 to assist the vast numbers of recently emancipated slaves. The bureau established schools in rural areas of the South for the purpose of educating the mostly illiterate Black and Poor White population. Other Northerners who moved to the South did so to participate in the profitable business of rebuilding railroads and various other forms of infrastructure that had been destroyed during the war. During the time most blacks were enslaved, many were prohibited from being educated and attaining literacy. Southern states had no public school systems, and upper-class white Southerners either sent their children to private schools (including in England) or hired private tutors. After the war, hundreds of Northern white women moved South, many to teach the newly freed African-American children. They joined like-minded Southerners, most of whom were employed by the Methodist and Baptist Churches, who spent much of their time teaching and preaching to slave and freedpeople congregations both before and after the Civil War. Carpetbaggers also established banks and retail businesses. Most were former Union soldiers eager to invest their savings and energy in this promising new frontier, and civilians lured south by press reports of "the fabulous sums of money to be made in the South in raising cotton." Foner notes that "joined with the quest for profit, however, was a reforming spirit, a vision of themselves as agents of sectional reconciliation and the South's "economic regeneration." Accustomed to viewing Southerners—black and white—as devoid of economic initiative, the "Puritan work ethic", and self-discipline, they believed that only "Northern capital and energy" could bring "the blessings of a free labor system to the region." Carpetbaggers tended to be well educated and middle class in origin. Some had been lawyers, businessmen, and newspaper editors. The majority (including 52 of the 60 who served in Congress during Reconstruction) were veterans of the Union Army. Leading "black carpetbaggers" believed that the interests of capital and labor were identical and that the freedmen were entitled to little more than an "honest chance in the race of life."
Many Northern and Southern Republicans shared a modernizing vision of upgrading the Southern economy and society, one that would replace the inefficient Southern plantation regime with railroads, factories, and more efficient farming. They actively promoted public schooling and created numerous colleges and universities. The Northerners were especially successful in taking control of Southern railroads, aided by state legislatures. In 1870, Northerners controlled 21% of the South's railroads (by mileage); 19% of the directors were from the North. By 1890, they controlled 88% of the mileage; 47% of the directors were from the North. Prominent examples in state politics Union General Adelbert Ames, a native of Maine, was appointed military governor and later was elected as Republican governor of Mississippi during the Reconstruction era. Ames tried unsuccessfully to ensure equal rights for black Mississippians. His political battles with the Southerners and African Americans ripped apart his party. The "Black and Tan" (biracial) constitutional convention in Mississippi in 1868 included 30 white Southerners, 17 Southern freedmen and 24 non-southerners, nearly all of whom were veterans of the Union Army. They included four men who had lived in the South before the war, two of whom had served in the Confederate States Army. Among the more prominent were Gen. Beroth B. Eggleston, a native of New York; Col. A.T. Morgan, of the Second Wisconsin Volunteers; Gen. W.S. Barry, former commander of a Colored regiment raised in Kentucky; an Illinois general and lawyer who graduated from Knox College; Maj. W.H. Gibbs, of the Fifteenth Illinois infantry; Judge W. B. Cunningham, of Pennsylvania; and Capt. E.J. Castello, of the Seventh Missouri infantry. They were among the founders of the Republican party in Mississippi. They were prominent in the politics of the state until 1875, but nearly all left Mississippi between 1875 and 1876 under pressure from the Red Shirts and White Liners. These white paramilitary organizations, described as "the military arm of the Democratic Party", worked openly to violently overthrow Republican rule, using intimidation and assassination to turn Republicans out of office and suppress freedmen's voting. Mississippi Representative Wiley P. Harris, a Democrat, stated in 1875: If any two hundred Southern men backed by a Federal administration should go to Indianapolis, turn out the Indiana people, take possession of all the seats of power, honor, and profit, denounce the people at large as assassins and barbarians, introduce corruption in all the branches of the public administration, make government a curse instead of a blessing, league with the most ignorant class of society to make war on the enlightened, intelligent, and virtuous, what kind of social relations would such a state of things beget. Albert T. Morgan, the Republican sheriff of Yazoo, Mississippi, received a brief flurry of national attention when insurgent white Democrats took over the county government and forced him to flee. He later wrote Yazoo; Or, on the Picket Line of Freedom in the South (1884). On November 6, 1875, Hiram Revels, a Mississippi Republican and the first African American U.S. Senator, wrote a letter to U.S. President Ulysses S. Grant that was widely reprinted.
Revels denounced Ames and Northerners for manipulating the Black vote for personal benefit, and for keeping alive wartime hatreds: Since reconstruction, the masses of my people have been, as it were, enslaved in mind by unprincipled adventurers, who, caring nothing for country, were willing to stoop to anything no matter how infamous, to secure power to themselves, and perpetuate it...My people have been told by these schemers, when men have been placed on the ticket who were notoriously corrupt and dishonest, that they must vote for them; that the salvation of the party depended upon it; that the man who scratched a ticket was not a Republican. This is only one of the many means these unprincipled demagogues have devised to perpetuate the intellectual bondage of my people...The bitterness and hate created by the late civil strife has, in my opinion, been obliterated in this state, except perhaps in some localities, and would have long since been entirely obliterated, were it not for some unprincipled men who would keep alive the bitterness of the past, and inculcate a hatred between the races, in order that they may aggrandize themselves by office, and its emoluments, to control my people, the effect of which is to degrade them. Elza Jeffords, a lawyer from Portsmouth, Ohio, who fought with the Army of the Tennessee, remained in Mississippi after the conclusion of the Civil War. He was the last Republican to represent that state in the U.S. House of Representatives, serving from 1883 to 1885. He died in Vicksburg, Mississippi, 16 days after he left Congress. The next Republican congressman from the state was not elected until 80 years later, in 1964: Prentiss Walker of Mize, Mississippi, who served a single term from 1965 to 1967. Corruption was a charge made by Democrats in North Carolina against the Republicans, notes the historian Paul Escott, "because its truth was apparent." The historians Eric Foner and W.E.B. Du Bois have noted that Democrats as well as Republicans received bribes and participated in decisions about the railroads. General Milton S. Littlefield was dubbed the "Prince of Carpetbaggers", and bought votes in the legislature "to support grandiose and fraudulent railroad schemes". Escott concludes that some Democrats were involved, but Republicans "bore the main responsibility for the issue of $28 million in state bonds for railroads and the accompanying corruption. This sum, enormous for the time, aroused great concern." Foner says Littlefield disbursed $200,000 in bribes to win support in the legislature for state money for his railroads, and Democrats as well as Republicans were guilty of taking the bribes and making the decisions on the railroad. North Carolina Democrats condemned the legislature's "depraved villains, who take bribes every day"; one local Republican officeholder complained, "I deeply regret the course of some of our friends in the Legislature as well as out of it in regard to financial matters, it is very embarrassing indeed." Escott notes that extravagance and corruption increased taxes and the costs of government in a state that had always favored low expenditure. The context was that a planter elite kept taxes low because it benefited them. They used their money toward private ends rather than public investment. None of the states had established public school systems before the Reconstruction state legislatures created them, and they had systematically underinvested in infrastructure such as roads and railroads.
Planters whose properties occupied prime riverfront locations relied on river transportation, but smaller farmers in the backcountry suffered. Escott claimed "Some money went to very worthy causes—the 1869 legislature, for example, passed a school law that began the rebuilding and expansion of the state's public schools. But far too much was wrongly or unwisely spent" to aid the Republican Party leadership. A Republican county commissioner in Alamance eloquently denounced the situation: "Men are placed in power who instead of carrying out their duties...form a kind of school for to graduate Rascals. Yes if you will give them a few Dollars they will liern you for an accomplished Rascal. This is in reference to the taxes that are rung from the labouring class of people. Without a speedy reformation I will have to resign my post." Albion W. Tourgée, formerly of Ohio and a friend of President James A. Garfield, moved to North Carolina, where he practiced as a lawyer and was appointed a judge. He once opined that "Jesus Christ was a carpetbagger." Tourgée later wrote A Fool's Errand, a largely autobiographical novel about an idealistic carpetbagger persecuted by the Ku Klux Klan in North Carolina. A politician in South Carolina who was called a carpetbagger was Daniel Henry Chamberlain, a New Englander who had served as an officer of a predominantly black regiment of the United States Colored Troops. He served as South Carolina's attorney general from 1868 to 1872 and was elected Republican governor, serving from 1874 to 1877. He was narrowly re-elected in 1876 in a campaign marked by egregious voter fraud and violence against freedmen by Democratic Red Shirts, who succeeded in suppressing the black vote in some majority-black counties, but as a result of the national Compromise of 1877, Chamberlain lost his office. While serving in South Carolina, Chamberlain was a strong supporter of Negro rights. Some historians of the early 1900s, who belonged to the Dunning School, which held that the Reconstruction era was fatally flawed, claimed that Chamberlain later was influenced by Social Darwinism to become a white supremacist. They also wrote that he supported states' rights and laissez-faire in the economy. They portrayed "liberty" in 1896 as the right to rise above the rising tide of equality. Chamberlain was said to justify white supremacy by arguing that, in evolutionary terms, the Negro obviously belonged to an inferior social order. Charles Woodward Stearns, also from Massachusetts, wrote an account of his experience in South Carolina: The Black Man of the South, and the Rebels: Or, the Characteristics of the Former and the Recent Outrages of the Latter (1873). Francis Lewis Cardozo, a black minister from New Haven, Connecticut, served as a delegate to South Carolina's 1868 Constitutional Convention. He made eloquent speeches advocating that the plantations be broken up and distributed among the freedmen. They wanted their own land to farm and believed they had already paid for land by their years of uncompensated labor and the trials of slavery. Henry C. Warmoth was the Republican governor of Louisiana from 1868 to 1874. As governor, Warmoth was plagued by accusations of corruption, which continued to be a matter of controversy long after his death. He was accused of using his position as governor to trade in state bonds for his personal benefit. In addition, the newspaper company which he owned received a contract from the state government. Warmoth supported the franchise for freedmen.
Warmoth struggled to lead the state during the years when the White League, a white Democratic terrorist organization, conducted an open campaign of violence and intimidation against Republicans, including freedmen, with the goals of regaining Democratic power and white supremacy. They pushed Republicans from political positions, were responsible for the Coushatta Massacre, disrupted Republican organizing, and preceded elections with such intimidation and violence that black voting was sharply reduced. Warmoth stayed in Louisiana after Reconstruction, as white Democrats regained political control of the state. He died in 1931 at age 89. George Luke Smith, a New Hampshire native, served briefly in the U.S. House from Louisiana's 4th congressional district but was unseated in 1874 by the Democrat William M. Levy. He then left Shreveport for Hot Springs, Arkansas. George E. Spencer was a prominent Republican U.S. Senator. His 1872 reelection campaign in Alabama opened him to allegations of "political betrayal of colleagues; manipulation of Federal patronage; embezzlement of public funds; purchase of votes; and intimidation of voters by the presence of Federal troops." He was a major speculator in distressed financial paper. Tunis Campbell, a black New York businessman, was hired in 1863 by Secretary of War Edwin M. Stanton to help former slaves in Port Royal, South Carolina. When the Civil War ended, Campbell was assigned to the Sea Islands of Georgia, where he engaged in an apparently successful land reform program for the benefit of the freedmen. He eventually became vice-chair of the Georgia Republican Party, a state senator and the head of an African-American militia which he hoped to use against the Ku Klux Klan. The "Brooks–Baxter War" was a factional dispute (1872–74) that culminated in an armed confrontation in 1874 between factions of the Arkansas Republican Party over the disputed 1872 election for governor. The victor in the end was the "Minstrel" faction led by carpetbagger Elisha Baxter over the "Brindle Tail" faction led by Joseph Brooks, which included most of the scalawags. The dispute weakened both factions and the entire Republican Party, enabling the sweeping Democratic victory in the 1874 state elections. William Hines Furbush, born a mixed-race slave in Carroll County, Kentucky, in 1839, received part of his education in Ohio. He migrated to Helena, Arkansas, in 1862. After returning to Ohio in February 1865, he joined the Forty-second Colored Infantry. After the war, Furbush migrated to Liberia through the American Colonization Society, where he continued to work as a photographer. He returned to Ohio after 18 months and moved back to Arkansas by 1870. Furbush was elected to two terms in the Arkansas House of Representatives, 1873–74 (from an African-American majority district in the Arkansas Delta, made up of Phillips and Monroe counties). He served in 1879–80 from the newly established Lee County. In 1873, the state passed a civil rights law. Furbush and three other black leaders, including the bill's primary sponsor, state senator Richard A. Dawson, sued a barkeeper in Little Rock, Arkansas, for refusing to serve their group. The suit resulted in the only successful Reconstruction prosecution under the state's civil rights law. In the legislature, Furbush worked to create Lee County, Arkansas, from portions of Phillips County, Crittenden County, Monroe County, and St. Francis County in eastern Arkansas, which had a black-majority population.
Following the end of his 1873 legislative term, Furbush was appointed as county sheriff by Republican Governor Elisha Baxter. Furbush twice won re-election as sheriff, serving from 1873 to 1878. During his term, he adopted a policy of "fusion", a post-Reconstruction power-sharing compromise between Populist Democrats and Republicans. Furbush originally was elected as a Republican, but he switched to the Democratic Party at the end of his time as sheriff. Democrats held most of the economic power, and cooperating with them could advance his career. In 1878, Furbush was elected again to the Arkansas House. His election is notable because he was elected as a black Democrat during a campaign season notorious for white intimidation of black and Republican voters in black-majority eastern Arkansas. He was the first-known black Democrat elected to the Arkansas General Assembly. In March 1879, Furbush left Arkansas for Colorado. He returned to Arkansas in 1888, setting up practice as a lawyer. In 1889, he co-founded the African American newspaper National Democrat. He left the state in the 1890s after it disenfranchised black voters. Furbush died in Indiana in 1902 at a veterans' home. Carpetbaggers were least numerous in Texas. Republicans controlled the state government from 1867 to January 1874. Only one state official and one justice of the state supreme court were Northerners. About 13% to 21% of district court judges were Northerners, along with about 10% of the delegates who wrote the Reconstruction constitution of 1869. Of the 142 men who served in the 12th Legislature, some 12 to 29 were from the North. At the county level, Northerners made up about 10% of the commissioners, county judges and sheriffs. George Thompson Ruby, an African American from New York City, who grew up in Portland, Maine, worked as a teacher in New Orleans from 1864 until 1866, when he migrated to Texas. There he was assigned to Galveston, Texas, as an agent and teacher for the Freedmen's Bureau. Active in the Republican Party and elected as a delegate to the state constitutional convention in 1868–1869, Ruby was later elected as a Texas state senator and had wide influence. He supported construction of railroads to support Galveston business. He was instrumental in organizing African-American dockworkers into the Labor Union of Colored Men, to gain them jobs at the docks after 1870. When Democrats regained control of the state government in 1874, Ruby returned to New Orleans, working in journalism. He also became a leader of the Exoduster movement. Blacks from the Deep South migrated to homestead in Kansas in order to escape white supremacist violence and the oppression of segregation. Historiography The Dunning school of American historians (1900–1950) espoused White supremacy and viewed "carpetbaggers" unfavorably, arguing that they degraded the political and business culture. The revisionist school in the 1930s called them stooges of Northern business interests. After 1960 the neoabolitionist school emphasized their moral courage. Modern use In the late 1990s, carpetbagging was used as a term in Great Britain during the wave of demutualizations of building societies. It described people who joined mutual societies with the hope of making a quick profit from their conversion to joint stock companies. Those so-called carpetbaggers were roving financial opportunists, often of modest means, who spotted investment opportunities and aimed to benefit from a set of circumstances to which they were not ordinarily entitled.
The best opportunities for carpetbaggers came from opening membership accounts at building societies to qualify for windfall gains, running into thousands of pounds, from the process of conversion and takeover. The influx of such transitory "token" members, who took advantage of the deposit criteria, often instigated or accelerated the demutualisation of the organisation. The new investors in those mutuals would receive shares in the newly created public companies, usually distributed at a flat rate, which equally benefited small and large investors, providing a broad incentive for members to vote for leadership candidates who were pushing for demutualisation. Carpetbagger was first used in this context in early 1997 by the chief executive of the Woolwich Building Society, who announced the society's conversion with rules removing the entitlement of the most recent new savers to potential windfalls, stating in a media interview, "I have no qualms about disenfranchising carpetbaggers." Between 1997 and 2002, a group of pro-demutualization supporters, "Members for Conversion", operated a website, carpetbagger.com, which highlighted the best ways of opening share accounts with UK building societies, and organised demutualisation resolutions.[full citation needed] That led many building societies to implement anti-carpetbagging policies, such as not accepting new deposits from customers who lived outside the normal operating area of the society. Another measure was to insert a charitable assignment clause for new members into the constitution of the organisation, requiring customers opening a savings account to sign a declaration agreeing that any windfall conversion benefits to which they might become entitled would be assigned to the Charities Aid Foundation. The term continues to be used within the co-operative movement to, for example, refer to the demutualisation of housing cooperatives. The analogous term to carpetbagging in Britain is "chicken run", denoting an MP moving to a safer constituency to seek re-election. The term was first used by the Labour Party to describe Norman Lamont's move from Kingston-upon-Thames in London to Harrogate and Knaresborough in North Yorkshire. The term has been used at subsequent elections to describe MPs including Shaun Woodward (Witney to St Helens South), Mims Davies (Eastleigh to Mid Sussex), Kieran Mullan (Crewe and Nantwich to Bexhill and Battle), and Richard Holden (North West Durham to Basildon and Billericay). The term carpetbagger has also been applied to those who join the Labour Party but lack roots in the working class that the party was formed to represent. During World War II, the U.S. Office of Strategic Services surreptitiously supplied necessary tools and materials to resistance groups in Europe. The OSS called this effort Operation Carpetbagger. The modified B-24 aircraft used for the night-time missions were referred to as "carpetbaggers". (Among other special features, they were painted a glossy black to make them less visible to searchlights.) Between January and September 1944, Operation Carpetbagger flew 1,860 sorties between RAF Harrington, England, and various points in occupied Europe. British agents used this "noise" as cover, with Carpetbagger flights also carrying designated agents transporting money (authentic and counterfeit) to the underground resistance.[citation needed] In Australia, "carpetbagger" may refer to unscrupulous dealers and business managers in indigenous Australian art.
The term was also used by John Fahey, a former Premier of New South Wales and federal Liberal finance minister, in the context of shoddy "tradespeople" who travelled to Queensland to take advantage of victims following the 2010–2011 Queensland floods. In the United States, the common modern usage, usually derogatory, refers to politicians who move to different states, districts or areas to run for office despite their lack of local ties or familiarity. The term is now sometimes even used for politicians who relocate from the South to the North for politically opportunistic reasons. For example, former Arkansas First Lady Hillary Clinton was attacked by opponents as carpetbagging because she had never resided in New York State or participated in the state's politics before the 2000 Senate race; Republican candidate and New York City mayor Rudy Giuliani mocked Clinton by putting an Arkansas flag on top of the New York City Hall. Notable examples A carpetbag steak or carpetbagger steak is an end cut of steak that is pocketed and stuffed with oysters, among other ingredients, such as mushrooms, blue cheese, and garlic. The steak is sutured with toothpicks or thread, and it sometimes is wrapped in bacon. The combination of beef and oysters is traditional. The earliest specific reference is in a United States newspaper in 1891. The earliest specific Australian reference is a printed recipe sometime between 1899 and 1907. In French politics, carpetbagging is known as parachutage, French for "parachuting".
========================================
[SOURCE: https://en.wikipedia.org/wiki/2012_United_States_presidential_election] | [TOKENS: 5366]
Contents 2012 United States presidential election Presidential elections were held in the United States on November 6, 2012. Incumbent Democratic president Barack Obama and his running mate, incumbent vice president Joe Biden, were elected to a second term. They defeated the Republican ticket of former governor of Massachusetts Mitt Romney and U.S. representative Paul Ryan of Wisconsin. As the incumbent president, Obama secured the Democratic nomination without serious opposition. The Republicans experienced a competitive primary. Romney was consistently competitive in the polls and won the support of many party leaders, but he faced challenges from a number of more conservative contenders. Romney secured his party's nomination in May, defeating former senator Rick Santorum, former speaker of the House and Georgia congressman Newt Gingrich, and Texas congressman Ron Paul, among other candidates. The campaigns focused heavily on domestic issues, and debate centered largely on responses to the Great Recession along with long-term federal budget issues, the future of social insurance programs, and the Affordable Care Act. Foreign policy was also discussed, including the end of the Iraq War in 2011, military spending, the Iranian nuclear program, and appropriate counteractions to terrorism. Romney claimed Obama's domestic policies were ineffective and fiscally unsound, while Obama's campaign sought to characterize Romney as a plutocratic businessman who was out of touch with the average American. The campaign was marked by a sharp rise in fundraising, including from nominally independent Super PACs. Obama defeated Romney, winning 332 Electoral College votes and 51.1% of the popular vote to Romney's 206 electoral votes and 47.2% of the popular vote. He became the third consecutive president (after Bill Clinton and George W. Bush) to win a second term. Obama carried all 18 "blue wall" states and defeated Romney in crucial swing states that Republicans had previously won in 2000 and 2004, namely Colorado, Florida, Nevada, Ohio, and Virginia. Despite his loss, Romney managed to flip Indiana, North Carolina, and Nebraska's 2nd congressional district from the 2008 election. Ultimately, Obama won eight of the nine main swing states, losing only North Carolina. As of 2026, this is the most recent presidential election in which the Democratic candidate won Iowa, Ohio, and Florida, along with Maine's 2nd congressional district. This also remains the most recent election in which an incumbent president won re-election to a second consecutive term, in which the incumbent presidential party won re-election, and in which the Democratic ticket did not include a woman. It is also the most recent presidential election in which the party that won the presidency did not also win control of both chambers of Congress, as well as the earliest presidential election in which all major party nominees for president and vice-president are still alive. Background In 2011, several state legislatures passed new voting laws, especially pertaining to voter identification, with the stated purpose of combating voter fraud; however, the laws were attacked by the Democratic Party as attempts to suppress voting among its supporters and to improve the Republican Party's presidential prospects. Florida, Georgia, Ohio, Tennessee, and West Virginia's state legislatures approved measures to shorten early voting periods.
Florida and Iowa barred all felons from voting. Kansas, South Carolina, Tennessee, Texas, and Wisconsin state legislatures passed laws requiring voters to have government-issued IDs before they could cast their ballots. This typically meant that people without driver's licenses or passports had to obtain new forms of ID. Former president Bill Clinton denounced them, saying, "There has never been in my lifetime, since we got rid of the poll tax and all the Jim Crow burdens on voting, the determined effort to limit the franchise that we see today". He was referring to Jim Crow laws passed in southern states near the turn of the twentieth century that disenfranchised most blacks and excluded them from the political process for more than six decades. Clinton said the moves would effectively disenfranchise core voter blocs that trend liberal, including college students, black people, and Latinos. The Obama campaign fought against the Ohio law, pushing for a petition and statewide referendum to repeal it in time for the 2012 election. In addition, the Pennsylvania legislature proposed a plan to change its representation in the electoral college from the traditional winner-take-all model to a district-by-district model. As the governorship and both houses of its legislature were Republican-controlled, the move was viewed by some as an attempt to reduce Democratic chances. Ultimately, the plan was not adopted, and Pennsylvania's winner-take-all format remained intact as of 2020. Nominations With an incumbent president running for re-election against token opposition, the race for the Democratic nomination was largely uneventful. The nomination process consisted of primaries and caucuses, held by the 50 states, as well as Guam, Puerto Rico, Washington, D.C., the U.S. Virgin Islands, American Samoa, and Democrats Abroad. Additionally, high-ranking party members known as superdelegates each received one vote in the convention. A few of the primary challengers surpassed the president's vote total in individual counties in several of the seven contested primaries, though none made a significant impact in the delegate count. Running unopposed everywhere else, Obama cemented his status as the presumptive Democratic nominee on April 3, 2012, by securing the minimum number of pledged delegates needed to obtain the nomination. Candidates with considerable name recognition who entered the race for the Republican presidential nomination in the early stages of the primary campaign included U.S. representative and former Libertarian nominee Ron Paul, former Minnesota governor Tim Pawlenty, who co-chaired John McCain's campaign in 2008, former Massachusetts governor Mitt Romney, the runner-up for the nomination in the 2008 cycle, and former Speaker of the House Newt Gingrich. The first debate took place on May 5, 2011, in Greenville, South Carolina, with businessman Herman Cain, former New Mexico governor Gary Johnson, Ron Paul, Tim Pawlenty, and former Pennsylvania senator Rick Santorum participating. Another debate took place a month later, with Newt Gingrich, Mitt Romney, former Utah governor Jon Huntsman, and Minnesota congresswoman Michele Bachmann participating, and Gary Johnson excluded. A total of thirteen debates were held before the Iowa caucuses. The first major event of the campaign was the Ames Straw Poll, which took place in Iowa on August 13, 2011. Michele Bachmann won the straw poll (this ultimately proved to be the acme of her campaign).
Pawlenty withdrew from the race after a poor showing in the straw poll, as did Thaddeus McCotter, the only ballot-qualified candidate who had been refused entry to the debate. It became clear at around this point in the nomination process that while Romney was considered to be the likely nominee by the Republican establishment, a large segment of the conservative primary electorate found him to be too moderate for their political views. As a result, a number of potential "anti-Romney" candidates were put forward, including future president Donald Trump, former Alaska governor and 2008 vice-presidential nominee Sarah Palin, New Jersey governor Chris Christie, and Texas governor Rick Perry, the last of whom decided to run in August 2011. Perry did poorly in the debates, however, and Herman Cain and then Newt Gingrich came to the fore in October and November. Due to a number of scandals, Cain withdrew just before the end of the year, after having already secured ballot placement in several states. Around the same time, Johnson, who had been able to get into only one other debate, withdrew to seek the Libertarian Party nomination. For the first time in modern Republican Party history, three different candidates won the first three state contests in January (the Iowa caucuses, the New Hampshire primary, and the South Carolina primary). Although Romney had been expected to win in at least Iowa and New Hampshire, Rick Santorum won the non-binding poll at caucus sites in Iowa by 34 votes, as near as could be determined from the incomplete tally, earning him a declaration as winner by state party leaders, although vote totals were missing from eight precincts. The election of county delegates at the caucuses would eventually lead to Ron Paul earning 22 of the 28 Iowa delegates to the Republican National Convention. Newt Gingrich won South Carolina by a surprisingly large margin, and Romney won only in New Hampshire. A number of candidates dropped out at this point in the nomination process. Bachmann withdrew after finishing sixth in the Iowa caucuses, Huntsman withdrew after coming in third in New Hampshire, and Perry withdrew when polls showed him drawing low numbers in South Carolina. Santorum, who had previously run an essentially one-state campaign in Iowa, was able to organize a national campaign after his surprising victory there. He unexpectedly carried three states in a row on February 7 and overtook Romney in nationwide opinion polls, becoming the only candidate in the race to effectively challenge the notion that Romney was the inevitable nominee. However, Romney won all of the other contests between South Carolina and the Super Tuesday primaries, and regained his first-place status in nationwide opinion polls by the end of February. The Super Tuesday primaries took place on March 6. Romney carried six states, Santorum carried three, and Gingrich won only in his home state of Georgia. Throughout the rest of March, 266 delegates were allocated in 12 events, including the territorial contests and the first local conventions that allocated delegates (Wyoming's county conventions). Santorum won Kansas and three Southern primaries, but he was unable to make any substantial gain on Romney, who became a formidable frontrunner after securing more than half of the delegates allocated in March.
On April 10, Santorum suspended his campaign for a variety of reasons, including a low delegate count, unfavorable polls in his home state of Pennsylvania, and his daughter's health, leaving Mitt Romney as the undisputed front-runner for the presidential nomination and allowing Gingrich to claim that he was "the last conservative standing" in the campaign for the nomination. After disappointing results in the April 24 primaries (finishing second in one state, third in three, and fourth in one), Gingrich dropped out on May 2 in a move that was seen as an effective end to the contest for the nomination. After Gingrich's spokesman announced his upcoming withdrawal, the Republican National Committee declared Romney the party's presumptive nominee. Ron Paul officially remained in the race, but he stopped campaigning on May 14 to focus on state conventions. On May 29, after winning the Texas primary, Romney had received a sufficient number of delegates to clinch the party's nomination with the inclusion of unpledged delegates. After winning the June 5 primaries in California and several other states, Romney had received more than enough pledged delegates to clinch the nomination without counting unpledged delegates, making the June 26 Utah Primary, the last contest of the cycle, purely symbolic. CNN's final delegate estimate, released on July 27, 2012, put Romney at 1,462 pledged delegates and 62 unpledged delegates, for a total estimate of 1,524 delegates. No other candidate had unpledged delegates. The delegate estimates for the other candidates were Santorum at 261 delegates, Paul at 154, Gingrich at 142, Bachmann at 1, Huntsman at 1, and all others at 0. On August 28, 2012, delegates at the Republican National Convention officially named Romney the party's presidential nominee. Romney formally accepted the delegates' nomination on August 30, 2012. Four other parties nominated candidates who had ballot access or write-in access to at least 270 electoral votes, the minimum number of votes needed in the 2012 election to win the presidency through a majority of the electoral college. Campaigns Besides the major-party tickets, only those four candidates appeared on ballots representing at least 270 electoral votes; all other candidates were on the ballots of fewer than 10 states, representing fewer than 100 electors and less than 20% of voters nationwide. The United States presidential election of 2012 broke new records in financing, fundraising, and negative campaigning. Through grassroots campaign contributions, online donations, and Super PACs, Obama and Romney raised a combined total of more than $2 billion. Super PACs constituted nearly one-fourth of the total financing, with most coming from pro-Romney PACs. Obama raised $690 million through online channels, beating his record of $500 million in 2008. Most of the advertising in the 2012 presidential campaign was decidedly negative—80% of Obama's ads and 84% of Romney's ads were negative. The tax-exempt non-profit Americans for Prosperity, a so-called "outside group", that is, a political advocacy group that is not a political action committee or super-PAC, ran a television advertising campaign opposing Obama described by The Washington Post as "early and relentless". Americans for Prosperity spent $8.4 million in swing states on television advertisements denouncing the American Recovery and Reinvestment Act of 2009 loan guarantee to Solyndra, a manufacturer of solar panels that went bankrupt, an advertising campaign described by The Wall Street Journal in November 2011 as "perhaps the biggest attack on Mr.
Obama so far". The Commission on Presidential Debates held four debates during the last weeks of the campaign: three presidential and one vice-presidential. The major issues debated were the economy and jobs, the federal budget deficit, taxation and spending, the future of Social Security, Medicare, and Medicaid, healthcare reform, education, social issues, immigration, and foreign policy. Debate schedule: An independent presidential debate featuring minor party candidates took place on Tuesday, October 23 at the Hilton Hotel in Chicago, Illinois. The debate was moderated by Larry King and organized by the Free & Equal Elections Foundation. The participants were Gary Johnson (Libertarian), Jill Stein (Green), Virgil Goode (Constitution), and Rocky Anderson (Justice). A second debate between Stein and Johnson took place on Sunday, November 4, and was moderated by Ralph Nader. Elections analysts and political pundits issue probabilistic forecasts of the composition of the Electoral College. These forecasts use a variety of factors to estimate the likelihood of each candidate winning the Electoral College electors for that state. Most election predictors use the following ratings: Below is a list of states considered by one or more forecast to be competitive; states that are deemed to be "safe" or "solid" by forecasters RealClearPolitics, Sabato's Crystal Ball, and FiveThirtyEight. Timeline Results On the day of the election, spread betting firm Spreadex offered an Obama Electoral College Votes spread of 296–300 to Romney's 239–243. In fact Obama's victory over Romney was greater, winning 332 electoral votes to Romney's 206. Obama received 51 percent of the national popular vote to Romney's 47 percent and carried eight of the nine states considered to be electoral battlegrounds. Altogether, Obama won in 26 states plus the District of Columbia, while Romney carried 24 states. Of the 3,154 counties/districts/independent cities making returns, Romney won the most popular votes in 2,447 (77.58%) while Obama carried 707 (22.42%). The results of the electoral vote were certified by Congress on January 4, 2013. Popular vote totals are from the Federal Election Commission report. The table below displays the official vote tallies by each state's Electoral College voting method. The source for the results of all states, except those that amended their official results, is the official Federal Election Commission report. The column labeled "Margin" shows Obama's margin of victory over Romney (the margin is negative for every state that Romney won). Maine and Nebraska each allow for their election results votes to be split between candidates. The winner within each congressional district gets one electoral vote for the district. The winner of the statewide vote gets two additional electoral votes. In the 2012 election, all four of Maine's electoral votes were won by Obama and all five of Nebraska's electoral votes were won by Romney. Red denotes states (or congressional districts that contribute an electoral vote) won by Republican Mitt Romney; blue denotes those won by Democrat Barack Obama. 
Florida, with 29 electoral votes, was the only state decided by a margin of under 1%; states totaling 46 electoral votes were decided by margins of under 5%; and states and districts totaling 120 electoral votes were decided by margins of between 5% and 10%. After the networks called Ohio (the state that was arguably the most critical for Romney, as no Republican had ever won the presidency without carrying it) for Obama at around 11:15 pm EST on Election Day, Romney was ready to concede the race, but hesitated when Karl Rove strenuously objected on Fox News to the network's decision to make that call. However, after Colorado and Nevada were called for the President (giving Obama enough electoral votes to win even if Ohio were to leave his column), in tandem with Obama's apparent lead in Florida and Virginia (both were eventually called for Obama), Romney acknowledged that he had lost and conceded at around 1:00 am EST on November 7. Despite public polling showing Romney behind Obama in the swing states of Nevada, Colorado, Iowa, Wisconsin, Ohio, and New Hampshire, tied with Obama in Virginia, and just barely ahead of Obama in Florida, the Romney campaign said they were genuinely surprised by the loss, having believed that public polling was oversampling Democrats. The Romney campaign had already set up a transition website, and had scheduled and purchased a fireworks display to celebrate in case he won the election. On November 30, 2012, it was revealed that shortly before the election, internal polling done by the Romney campaign had shown Romney ahead in Colorado and New Hampshire, tied in Iowa, and within a few points of Obama in Wisconsin, Pennsylvania, Minnesota, and Ohio. In addition, the Romney campaign had assumed that they would win Florida, North Carolina and Virginia. The polls had made Romney and his campaign team so confident of their victory that Romney did not write a concession speech until Obama's victory was announced. Foreign leaders reacted with both positive and mixed messages. Most world leaders congratulated and praised Obama on his re-election victory. However, Venezuela and some other states had tempered reactions. Pakistan commented that Romney's defeat had made Pakistan-United States relations safer. Stock markets fell noticeably after Obama's re-election, with the Dow Jones Industrial Average, NASDAQ, and the S&P 500 each declining over two percent the day after the election. All 50 states had a petition on the White House website We The People calling for their state to secede from the union. These petitions were created by individual people, with some gaining thousands of signatures. Voter demographics The United States has a population of 50 million Hispanic and Latino Americans, 27 million of whom are citizens eligible to vote (13% of total eligible voters). Traditionally, only half of eligible Hispanic voters vote (around 7% of voters); of them, 71% voted for Barack Obama (increasing his percentage of the vote by 5%); therefore, the Hispanic vote was an important factor in Obama's re-election, since the vote difference between the two main parties was only 3.9%. Exit polls were conducted by Edison Research of Somerville, New Jersey, for the National Election Pool, a consortium which at the time consisted of ABC News, Associated Press, CBS News, CNN, Fox News, and NBC News.
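The voter-demographics figures above follow from simple arithmetic, which the short Python check below reproduces using the passage's own round numbers; the 27% Romney share of the Hispanic vote is an added assumption for the final margin line, not a figure from the passage.

# Back-of-the-envelope check of the quoted estimates (not official tallies).
eligible_share = 0.13    # Hispanic share of eligible voters (from the passage)
turnout_factor = 0.5     # roughly half of eligible Hispanic voters vote
obama_share = 0.71       # share of the Hispanic vote won by Obama
romney_share = 0.27      # approximate Romney share of the Hispanic vote (assumed)

hispanic_voter_share = eligible_share * turnout_factor   # about 6.5%, i.e. "around 7%"
obama_points = hispanic_voter_share * obama_share        # about 4.6 points, i.e. "5%"
net_margin = hispanic_voter_share * (obama_share - romney_share)

print(f"Hispanics as share of voters: {hispanic_voter_share:.1%}")
print(f"Contribution to Obama's total: {obama_points:.1%}")
print(f"Net margin contribution: {net_margin:.1%} (overall gap was 3.9%)")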
Analysis Combined with the re-election victories of his two immediate predecessors, Bill Clinton in 1996 and George W. Bush in 2004, Obama's re-election victory in 2012 marked only the second time in American history that three consecutive presidents each won re-election; the only precedent is the run of consecutive two-term presidencies of Thomas Jefferson, James Madison, and James Monroe, culminating in Monroe's re-election in 1820, the only other period in which one two-term president directly succeeded another. This was also the first election since 1928 in which neither of the major candidates had any military experience. The election was arguably decided by three counties: Miami-Dade County (Florida), Cuyahoga County (Ohio), and Philadelphia (Pennsylvania). If these three counties had cast zero votes, Obama would have lost all three states and the election. This election marked the first time since Franklin D. Roosevelt's last two re-elections in 1940 and 1944 that the Democratic nominee won a majority of the popular vote in two consecutive elections. Obama also became the first president of either party to secure a majority of the popular vote in two elections since Ronald Reagan in 1980 and 1984. Obama is the third Democratic president to secure at least 51% of the vote twice, after Andrew Jackson and Franklin D. Roosevelt. Romney won the popular vote in 226 congressional districts, making this the first time since 1960 that the winner of the election did not win the popular vote in a majority of the congressional districts. This is the last time that the Democrats won a majority of states in a presidential election. Romney lost his home state of Massachusetts, becoming the first major party presidential candidate to lose his home state since Democrat Al Gore lost his home state of Tennessee to Republican George W. Bush in 2000. Romney lost his home state by more than 23%, the worst losing margin for a major party candidate since John Frémont in 1856. Faring even worse than Frémont, Romney failed to win a single county in his home state, a result last seen with Theodore Roosevelt in 1912. In addition, since Obama carried Ryan's home state of Wisconsin, the Romney–Ryan ticket was the first major party ticket since 1972 to have both of its nominees lose their home states. Romney won the popular vote in every county of three states: Utah, Oklahoma, and West Virginia; Obama did so in four states: Vermont, Massachusetts, Rhode Island, and Hawaii. Romney's loss prompted the Republican National Committee to try to appeal to the American Latino population by concentrating on different approaches to immigration. These efforts were short-lived due to pushback and anger from the Republican base, and they may have contributed to the selection of Donald Trump as the party's presidential nominee four years later. Gary Johnson's popular vote total set a Libertarian Party record, and his popular vote percentage was the second-best showing for a Libertarian in a presidential election, trailing only Ed Clark's in 1980. Johnson would go on to beat this record in the 2016 presidential election, winning the most votes for the Libertarian ticket in history. At the time, Green Party candidate Jill Stein's popular vote total made her the most successful female presidential candidate in a general election in United States history; she was later surpassed by Hillary Clinton in 2016. Obama is the fifth Democrat to have won re-election to a consecutive term, along with Andrew Jackson, Woodrow Wilson, Franklin D. Roosevelt, and Bill Clinton.
This presidential election was the most recent in which no state split its electoral votes, and the most recent in which a candidate received over 60% of the Electoral College vote. It is also the most recent election in which two of the major party nominees went on to become president (namely, Obama and Biden). Prior to 2024, Obama's vote total was the fourth-highest ever received, behind his own 2008 total and those of both major candidates in 2020, and it remains the most ever for a re-elected incumbent president.[b] The election marked the first time since 1988 in which no state was won with a mere plurality of its popular vote; every state was carried by an outright majority. Furthermore, it is the only post-World War II presidential election in which no state was won by a margin smaller than 30,000 votes: Obama's narrowest victories were in New Hampshire, by 39,643 votes, and Florida, by 74,309 votes, whereas every other presidential election in modern history has seen at least one state won by only several thousand votes. As of 2025, this is also the most recent election in which neither Maine nor Nebraska split its electoral votes. So far, this is the only presidential election in history in which both the Republican and Democratic vice presidential candidates were practicing Roman Catholics, and the only one in which no white Protestant appeared on a major party ticket. This is also the most recent election in which a party won consecutive presidential elections. Obama was the first Democrat since Franklin D. Roosevelt to win a majority of the popular vote more than once. While Obama was the first president since Dwight D. Eisenhower in 1952 and 1956 to receive more than 51% of the popular vote twice, he was also the first president since Franklin D. Roosevelt in 1936, 1940, and 1944 to win consecutive presidential elections with declining percentages of the popular vote.[c] Obama was the most recent of just four presidents in United States history to win re-election with a lower percentage of the electoral vote than in a prior election; the other three were James Madison in 1812, Woodrow Wilson in 1916, and Franklin D. Roosevelt in 1940 and 1944. Additionally, Obama was one of only five presidents to win re-election with a smaller percentage of the popular vote than in a prior election; the other four were James Madison in 1812, Andrew Jackson in 1832, Grover Cleveland in 1892, and Franklin D. Roosevelt in 1940 and 1944. All four major candidates for president and vice president went on to hold significant public office after this election. Obama and Biden served their second terms as president and vice president, respectively. Biden initially retired from politics after leaving office, but was later elected president in 2020, defeating Trump, and served one term. Romney moved to Utah in 2014 and was elected to the Senate there in 2018, succeeding Orrin Hatch, and served one term. Ryan served three more terms in the House, eventually serving as Speaker from 2015 until his retirement from politics in 2019.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Federal_Trade_Commission_Act_of_1914] | [TOKENS: 1895]
Contents Federal Trade Commission Act of 1914 The Federal Trade Commission Act of 1914 is a United States federal law which established the Federal Trade Commission. The Act was signed into law by US President Woodrow Wilson in 1914 and outlaws unfair methods of competition and unfair acts or practices that affect commerce. Background The inspiration and motivation for this act started in 1890, when the Sherman Antitrust Act was passed. There was a strong antitrust movement to prevent manufacturers from joining price-fixing cartels. After Northern Securities Co. v. United States, a 1904 case that dismantled a J. P. Morgan company, antitrust enforcement became institutionalized. Soon, US President Theodore Roosevelt created the Bureau of Corporations, an agency that reported on the economy and the conduct of businesses. The agency was the predecessor to the Federal Trade Commission. In 1914, Congress built on the agency's work by passing the Federal Trade Commission Act and the Clayton Antitrust Act. The Federal Trade Commission Act was designed for business reform. Congress passed the act in the hope of protecting consumers against deceptive advertising and of forcing businesses to be upfront and truthful about the items they sold. The act was part of a broader movement in the early 20th century to use specialized bodies such as commissions to regulate and oversee certain forms of business. The Federal Trade Commission Act works in conjunction with the Sherman Act and the Clayton Act. Any violation of the Sherman Act also violates the Federal Trade Commission Act, so the Federal Trade Commission can act on cases under either statute. The Federal Trade Commission Act and both antitrust laws were created with the sole objective of "protect[ing] the process of competition for the benefit of consumers, making sure there are strong incentives for businesses to operate efficiently, keep prices down, and keep quality up." The acts are considered the core of antitrust law and remain central to its enforcement today. This commission was authorized to issue "cease and desist" orders to large corporations to curb unfair trade practices. In addition, the Federal Trade Commission Act is also considered a privacy-protecting measure, since it allows the FTC to penalize companies that violate their own stated policies, engage in false advertising, or take other actions that can harm consumers. Some of the unfair methods of competition that were targeted include deceptive advertisements and pricing. The act passed the Senate by a 43–5 vote on September 8, 1914, and the House on September 10 without a tally of yeas and nays. It was signed into law by President Wilson on September 26. Summary The Federal Trade Commission Act does more than create the Commission: Under this Act, the Commission is empowered, among other things, to (a) prevent unfair methods of competition, and unfair or deceptive acts or practices in or affecting commerce; (b) seek monetary redress and other relief for conduct injurious to consumers; (c) prescribe trade regulation rules defining with specificity acts or practices that are unfair or deceptive, and establishing requirements designed to prevent such acts or practices; (d) conduct investigations relating to the organization, business, practices, and management of entities engaged in commerce; and (e) make reports and legislative recommendations to Congress. The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices in or affecting commerce.
The Commission is empowered to enforce the act's provisions against all persons, partnerships or corporations, with several exceptions, including banks, savings and loan institutions, and federal credit unions, each as described in the FTC Act. Banks, savings and loan institutions, federal credit unions and certain other financial entities are instead under the jurisdiction of the Consumer Financial Protection Bureau. The Commission enforces the FTC Act through its federal rulemaking authority to issue industry-wide rules and regulations, adjudicatory powers, and statutory authority to file civil actions in certain circumstances. The FTC Act does not give consumers the right to sue for violations of the act, but consumers may complain to the Commission about acts or practices they believe to be unfair or deceptive. Consumers may, however, be authorized to sue under a state "UDAP" (unfair, deceptive and abusive practices) statute, sometimes called a "Little FTC Act." An act or practice is "deceptive" under the FTC Act when there is a representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances. The representation, omission or practice must also be material, in that it is likely to affect the consumer's conduct or decision regarding the product or service. If the representation or practice is directed to a particular group, the Commission will consider reasonableness from that targeted group's perspective. Notably, there is no requirement that the actor intend for their acts to be misleading. An act or practice is "unfair" under the FTC Act if it "causes or is likely to cause substantial injury" to consumers and the injury is "not reasonably avoidable by consumers themselves." Further, for an act or practice to be unfair, the injury cannot be outweighed by countervailing benefits to consumers or competition. An example of an injury that rises to the level of "substantial" for unfairness purposes would be the coercion of consumers into purchasing defective goods or services on credit without the ability to assert creditor claims or defenses against the transaction. Although public policy is not a specific criterion, it may be considered in determining how substantial an injury might be. Enforcement If, after investigating, the Commission has reason to believe an actor has violated the FTC Act's prohibition on unfair methods of competition or unfair or deceptive acts or practices, and that a proceeding against the actor is in the public's best interest, the Commission is authorized to commence administrative proceedings against the actor in administrative court. Other parties may apply to intervene and appear at the hearing. If, after the administrative hearing, the Commission determines the actor has violated the FTC Act's prohibitions on unfair and deceptive acts, it must provide the actor with findings of fact and issue and serve a cease and desist order against the violation. The enjoined party may appeal the FTC's cease and desist order to the U.S. Court of Appeals in "any circuit where the method of competition or act or practice in question was used or where such person, partnership or corporation resides or carries on business . . . ." When a cease and desist order against a person's act or practice of unfair and deceptive practices becomes final, the Commission may then seek relief for the violation in either a U.S. district court or "in any competent jurisdiction of a State."
If the court determines that the act or practice in question is "one in which a reasonable man would have known under the circumstances was dishonest or fraudulent," the court may grant relief that the "court finds necessary to redress injury to consumers or other persons, partnerships, and corporations" resulting from the violation or unfair or deceptive act or practice. The statute provides a non-exhaustive list of relief available, including rescission or reformation of contracts, refunds or returns of property, damages, or public notice of the violation. In addition, if an actor subject to a cease and desist order violates the Commission's final and in-effect order to cease and desist engaging in an unfair or deceptive act or practice, the enjoined actor is automatically liable for a civil penalty of up to $10,000 per violation, the amount of which is to be determined by a district court. In such circumstances, the FTC Act gives U.S. district courts the power to grant mandatory injunctions and "such other and further equitable relief as they deem appropriate" in order to enforce the Commission's final order. The Commission is also authorized to commence civil actions in a U.S. district court, without first adjudicating the matter in administrative court, against actors it finds to be in violation of the Commission's promulgated rules prohibiting deceptive and unfair practices. It may do so, however, only in certain circumstances, including if it determines that the actor had actual knowledge or "knowledge fairly implied on the basis of objective circumstances" that the act is unfair or deceptive. If the Commission issued a final and in-effect cease and desist order through its administrative proceedings with regard to an unlawful act or practice, it may initiate civil proceedings against another actor for engaging in the same unlawful act or practice, even when the new actor was not subject to the initial cease and desist order. However, the Commission may do so only if the actor had engaged in the act or practice with "actual knowledge" that the act or practice was both "unfair or deceptive" and unlawful. Actual knowledge can be established with a showing that the Commission provided the actor with a copy of its determination or a synopsis of such determination that led to the relevant cease and desist order. Both types of action can result in a civil penalty of up to $10,000 per violation, in an amount determined by the court. The FTC Act also authorizes the Commission in particular cases to obtain a permanent injunction through a civil action in federal court against any actor under the Commission's jurisdiction if it believes the actor "is violating, or is about to violate, any provision of law" enforced by the Commission. The U.S. Supreme Court has determined that the provision providing the Commission with its power to seek a permanent injunction does not give it the extra power to seek an award of "equitable monetary relief such as restitution or disgorgement."
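The Act's two consumer-protection standards, as summarized above, compose into a simple decision procedure. The sketch below is purely illustrative: the type and predicate names are hypothetical, invented for this example, and they encode only the elements as stated in this article, not any official FTC formulation:

```python
from dataclasses import dataclass

@dataclass
class Practice:
    # Hypothetical fields mirroring the elements described above.
    likely_to_mislead: bool       # likely to mislead a consumer acting reasonably,
                                  # judged from the targeted group's perspective
    material: bool                # likely to affect the consumer's conduct or decision
    substantial_injury: bool      # causes or is likely to cause substantial injury
    reasonably_avoidable: bool    # injury consumers could reasonably avoid themselves
    outweighed_by_benefits: bool  # outweighed by benefits to consumers or competition

def is_deceptive(p: Practice) -> bool:
    # Deception requires a material representation, omission, or practice that is
    # likely to mislead; note that intent to mislead is NOT required.
    return p.likely_to_mislead and p.material

def is_unfair(p: Practice) -> bool:
    # Unfairness requires all three prongs: substantial injury that is neither
    # reasonably avoidable nor outweighed by countervailing benefits.
    return (p.substantial_injury
            and not p.reasonably_avoidable
            and not p.outweighed_by_benefits)
```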
========================================
[SOURCE: https://en.wikipedia.org/wiki/Electromagnetism] | [TOKENS: 2947]
Contents Electromagnetism In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies. In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. 
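For reference, the "set of four partial differential equations" mentioned above, together with the Lorentz force law the article cites for individual charges, in their standard modern differential SI form (standard physics, stated here for convenience rather than quoted from this article's sources):

```latex
% Maxwell's equations (SI units, differential form)
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad
\nabla \cdot \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

% Lorentz force on a charge q moving with velocity v
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
```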
The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light. History Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures). Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments. In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics.
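The point noted earlier, that the speed of light is established by the permeability and permittivity of the propagation medium, can be checked numerically. A minimal sketch using standard CODATA values for the two vacuum constants (standard physics data, not figures from this article's sources):

```python
import math

# Vacuum permeability and permittivity (SI, CODATA 2018 values)
mu_0 = 1.25663706212e-6   # H/m
eps_0 = 8.8541878128e-12  # F/m

# Maxwell's equations predict electromagnetic waves travelling at c = 1/sqrt(mu_0 * eps_0)
c = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c = {c:.6e} m/s")  # ~2.997925e+08 m/s, the measured speed of light
```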
The unification has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what the electromagnetic theory of that time proposed, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it certain whether any current actually flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." A fundamental force The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range. All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction. Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena. Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects. The effective forces generated by the momentum of electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle.
The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. Classical electrodynamics In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until the experiment Benjamin Franklin proposed in 1752 was conducted on 10 May of that year by Thomas-François Dalibard of France, who used a 40-foot-tall (12 m) iron rod instead of a kite and successfully extracted electrical sparks from a cloud. One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, culminating in the publication of James Clerk Maxwell's treatise, which unified previous developments into a single theory and proposed that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions by Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) Today few problems in electromagnetism remain unsolved.
These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields. Extension to nonlinear phenomena The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics. Quantities and units Common units related to electromagnetism include the ampere, the coulomb, the volt, the ohm, the farad, the henry, the tesla, and the weber. In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because, unlike the situation for mechanical units, there is no one-to-one correspondence between electromagnetic units in SI and those in CGS. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. Applications The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices.
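As an illustration of the unit-system dependence described under Quantities and units, Coulomb's law takes different forms in SI and in Gaussian CGS units. This is a standard textbook comparison, assumed here rather than drawn from this article's text:

```latex
% Coulomb's law in SI units, with the vacuum permittivity appearing explicitly:
F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}

% In Gaussian CGS units the constant is absorbed into the unit of charge:
F = \frac{q_1 q_2}{r^2}

% Factors of c migrate as well; the Lorentz force in Gaussian units reads
\mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B}\right)
```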
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-FOOTNOTEFoner1988xxvii,_575–587-143] | [TOKENS: 17273]
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Accounting for more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear-weapon state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in effect in practice since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and, spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina in December 1860, 11 slave-state governments voted to secede from the United States, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders (a rough cross-check of this figure appears below). From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. The Redeemers quickly began evicting the carpetbaggers and regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, by which time the United States outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
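The rough cross-check of the Homestead Acts figure promised above. The standard 160-acre claim size under the 1862 Act is an assumption introduced here for illustration, not a figure stated in this article:

```python
# Rough cross-check of the "nearly 10 percent" Homestead Acts figure.
homesteaders = 1.6e6      # homesteaders cited above
acres_per_claim = 160     # standard claim size under the 1862 Act (assumed)
us_total_acres = 2.3e9    # total U.S. land area, ~3.6M sq mi * 640 acres/sq mi

granted = homesteaders * acres_per_claim         # ~256 million acres
share = granted / us_total_acres
print(f"Share of total land area: {share:.1%}")  # ~11%, broadly consistent
```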
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies, Italy and Germany, until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.
Geography
The United States is the world's third-largest country by total area, behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes are common throughout Alaska's Alexander Archipelago and the Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones, spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions that draw the most new residents are often also those most vulnerable to extreme weather. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the Wilderness Act of 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats; the United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.
Government and politics
The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, in which the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C.; the federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. The tribes hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties; however, parties developed independently of it in the late 18th century with the Federalists and Anti-Federalists. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party; the former is perceived as relatively liberal in its political platform and the latter as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies, and many have consulates (official representatives), in the country. Likewise, nearly all countries maintain formal diplomatic missions in the United States, with the exceptions of Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
American geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later; Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches: the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The military's total strength is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons, the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing the rulings of U.S. federal courts and federal laws, and combating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher".
Economy
The U.S. has a highly developed mixed economy that has been the world's largest in nominal terms since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parity (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large market for U.S. Treasury securities, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including its USMCA partners Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output, after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and in 2023 had the fourth-highest median household income, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated: in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, or approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and is one of the few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and a lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States' private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. As of 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks; about 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries, and many U.S. localities are relatively car-dependent. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia and the Southeast. The United States has an extensive air transportation network, and U.S. civilian airlines are all privately owned. The three largest airlines in the world by total number of passengers carried are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as are five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight, in contrast to the more passenger-centered rail networks of Europe. Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km); they are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, the busiest of which is the Port of Los Angeles.
Demographics
The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and in 2019 the country had the world's highest rate of children living in single-parent households, at 23%. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group, at 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English has long been the de facto official language of the United States, and in 2025 Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
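As a quick consistency check of the Population Clock figures quoted at the start of this section (my arithmetic, not from the source text), the per-second rate converts to the daily figure as follows:

\[
\frac{86{,}400\ \text{seconds per day}}{16\ \text{seconds per net new person}} = 5{,}400\ \text{people per day}
\]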
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries, for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per public elementary and secondary school student in the 2020–2021 school year. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. Literacy in the U.S. is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022.
Culture and society
The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured; additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality, and LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants; whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th-century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered on industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox); all four major broadcast television networks are commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In 2020, there were 15,460 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S.
video game industry consisted of 2,457 companies supporting around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals, and U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles arrived early enough to have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade; minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W. C. Handy and Jelly Roll Morton; Louis Armstrong and Duke Ellington increased its popularity soon after. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars such as Frank Sinatra and Elvis Presley became global celebrities and best-selling music artists, as did artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. One study found that Manhattan's Garment District has been synonymous with American fashion since the industry's beginnings there in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan, and some labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the world's most commercially successful and best-attended films. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, and potatoes, with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth; it would become the United States' most prestigious culinary school, where many of the most talented American chefs would study before their successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and employed more than 15 million people, or 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences; the NCAA March Madness tournament and the College Football Playoff are among the most-watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
In other international competition, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup four times and the Olympic soccer tournament five times. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, at the time a world record crowd for a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_Mouse] | [TOKENS: 220]
Contents PlayStation Mouse The PlayStation Mouse is an input device for the PlayStation. The mouse was released in Japan on December 3, 1994, the launch date of the PlayStation. The mouse is a simple two-button ball mouse that plugs directly into the PlayStation controller port without adapters or conversions and is a fully supported Sony accessory. It was packaged along with a mouse mat bearing the PlayStation logo. The mouse is mainly used in point-and-click adventures, strategy games, simulation games and visual novels. In later years, first-person shooters were released that use the peripheral to aim the player's view, in the same manner as comparable games on the PC. It is also used by the arcade light gun shooting game Area 51 as an aiming device in place of a light gun. A special Konami-branded edition of the mouse was released alongside the Japanese exclusive title Tokimeki Memorial: Forever With You. Mouse packs for Disney's Winnie the Pooh Kindergarten and Disney's Winnie the Pooh Preschool were also released exclusively in Japan.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Strongtalk] | [TOKENS: 401]
Contents Strongtalk In computing, Strongtalk is a Smalltalk environment with optional static typing support. Strongtalk can perform some compile-time checks and offer stronger type-safety guarantees; this is the source of its name. It is non-commercial, though it was originally a commercial project developed by a small startup company named LongView Technologies (trading as Animorphic Systems). History David Griswold wanted to use Smalltalk more extensively, but then-extant implementations were insufficient for his needs. He wanted to improve the performance, add type-checking, and use native graphical user interface (GUI) widgets. His efforts resulted in the 1993 paper he co-authored with Gilad Bracha. This version was based on adding type-checking to the ParcPlace Systems implementation of Smalltalk. However, an implementation begun from scratch could incorporate a better type system. He became interested in the improvements that the team for the language Self had achieved, and envisioned using the same methods to improve Smalltalk. Urs Hölzle, who worked on the powerful Self compiler, spoke with Griswold about implementing the same type feedback in a Smalltalk compiler. Griswold, Hölzle, Lars Bak, and others formed a small company (LongView Technologies, doing business as Animorphic Systems) to re-implement Strongtalk. Work began in 1994 and they completed an implementation in 1996. The firm was bought by Sun Microsystems in 1997, and the team shifted its focus to Java, releasing the HotSpot virtual machine; work on Strongtalk stalled. Sun released the 1997 re-implementation of Strongtalk as open-source software under a revised BSD license, including the Strongtalk system image in 2002 and the virtual machine in 2006. Strongtalk has been touted as the fastest implementation of Smalltalk. Strongtalk is available for Windows XP (other ports are in the works) and includes a basic development environment.
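The general idea of optional static typing can be illustrated loosely with Python's gradual type hints, which are likewise optional annotations checked by an external tool rather than enforced at run time. This is an analogy only, not Strongtalk syntax, and the function and variable names below are invented for illustration:

# Loose Python analogy for optional static typing: annotations may be
# added incrementally and are ignored at run time, but a separate
# static checker (such as mypy) can use them to reject ill-typed
# calls before the program runs. Illustrative only, not Strongtalk.

def scaled(values: list[float], factor: float) -> list[float]:
    """Scale every element of a list by a constant factor."""
    return [v * factor for v in values]

print(scaled([1.0, 2.5, 4.0], 2.0))  # runs fine: [2.0, 5.0, 8.0]

# A static checker flags the next call without executing it, e.g.:
#   error: Argument 2 to "scaled" has incompatible type "str"
# scaled([1.0, 2.5, 4.0], "2")

As in Strongtalk, removing the annotations leaves a working dynamically typed program; the annotations only add information for the checker.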
========================================
[SOURCE: https://en.wikipedia.org/wiki/MyHDL] | [TOKENS: 122]
Contents MyHDL MyHDL is a Python-based hardware description language (HDL). It is developed by Jan Decaluwe. Conversion examples Here you can see an example of converting a MyHDL design to VHDL and/or Verilog. A small combinatorial design The example is a small combinatorial design, more specifically a binary to Gray code converter. The design is written as ordinary Python; you can then create an instance and convert it to Verilog and VHDL, as sketched below.
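What follows is a minimal sketch of such a converter, modeled on the bin2gray example in the MyHDL documentation; the 8-bit width and the use of the modern @block conversion API are assumptions here, not a verbatim reproduction of the article's original listings:

# Binary-to-Gray-code converter in MyHDL (a sketch; width is arbitrary).
from myhdl import block, always_comb, Signal, intbv

@block
def bin2gray(B, G):
    """B -- binary-encoded input signal; G -- Gray-encoded output signal."""
    @always_comb
    def comb():
        # Classic conversion: the input XORed with itself shifted right by one.
        G.next = (B >> 1) ^ B
    return comb

# Create an instance and convert it to both target HDLs.
width = 8
B = Signal(intbv(0)[width:])
G = Signal(intbv(0)[width:])
inst = bin2gray(B, G)
inst.convert(hdl='Verilog')  # writes bin2gray.v
inst.convert(hdl='VHDL')     # writes bin2gray.vhd

The converted Verilog and VHDL modules implement the same shift-and-XOR expression; by default MyHDL writes them to bin2gray.v and bin2gray.vhd.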
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-219] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion in OpenAI and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the AI safety researchers then employed left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged behind the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a floor for how highly the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partly needed to pay for Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added it to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies.

In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024 and was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion. The sale made OpenAI the most valuable privately owned company in the world, surpassing SpaceX.

On November 17, 2023, Sam Altman was removed as CEO when the company's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating that they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up this observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.

In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management.

In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry". Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the Humanity's Last Exam (HLE) benchmark. Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said would be better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it never received anything close to 20%. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his seat on the board of directors, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, co-leader of the superalignment team, also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media. Paul Nakasone subsequently joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side of that threshold should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI and from acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and Alternate Media Inc. filed a copyright lawsuit against OpenAI. The lawsuit is said to have charted a new legal strategy for digital-only publishers suing OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. Stein-Erik Soelberg, 56, allegedly murdered his mother, Suzanne Adams, having often discussed his paranoid delusions with ChatGPT in the months prior. In December 2025, Adams's estate sued OpenAI, claiming that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-goodman97-149] | [TOKENS: 11899]
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level, the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost, and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground; it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period continues to the present and dominates the geological processes now at work. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible to the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963, the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971, Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
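As a rough cross-check of the rounded day and year figures quoted above (an illustrative calculation, not a source value; the precise astronomical constants give about 668.6 sols per Martian year):

# Number of Martian solar days (sols) in a Martian year, computed from
# the rounded figures quoted above: 687 Earth days per Martian year
# and 24.6 hours per sol.
earth_days_per_mars_year = 687
hours_per_sol = 24.6

sols_per_mars_year = earth_days_per_mars_year * 24 / hours_per_sol
print(round(sols_per_mars_year, 1))  # -> 670.2, close to the precise ~668.6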
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system between 3.5 and 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. The geological history of Mars can be split into many periods, but the three primary ones are the Noachian, Hesperian, and Amazonian periods outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this, the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core with a radius of 613 ± 67 kilometres (381 ± 42 mi). Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by fine-grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the radiation of 1.84 millisieverts per day, or 22 millirads per day, during the flight to and from Mars. For comparison, radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day; lava tubes southwest of Hadriacus Mons may have levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists, writers, and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude, to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea-level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. However, Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and it extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, which would make Mars a planet with a possible two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and the high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust settles out of the thinner Martian atmosphere sooner than similarly sized dust settles out of Earth's atmosphere. For example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
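For orientation, the mean value just quoted can be compared directly with Earth's sea-level pressure, and the atmospheric scale height discussed below follows from the barometric formula (assuming round values: a mean temperature near 210 K, a CO2 molar mass of 0.044 kg/mol, and g ≈ 3.71 m/s²):

$$ \frac{p_{\text{Mars}}}{p_{\text{Earth}}} \approx \frac{600\ \text{Pa}}{101{,}325\ \text{Pa}} \approx 0.006 $$

$$ H = \frac{RT}{Mg} \approx \frac{8.314 \times 210}{0.044 \times 3.71} \approx 1.07 \times 10^{4}\ \text{m} \approx 11\ \text{km}, $$

both consistent with the figures in the surrounding text.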
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions, as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has a larger eccentricity than Earth's, and Mars reaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme, and the seasons in its northern are milder, than would otherwise be the case. Summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight (by the inverse-square law, 1/1.52² ≈ 0.43). Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The changing seasons also cause dry ice to cover the polar ice caps. Hydrology While Mars contains water in large amounts, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice, as well as various forms of snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering, and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals, such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect that much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = 9.3 ± 1.7 × 10^-4) is five to seven times the amount on Earth (D/H = 1.56 × 10^-4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4-kilometre (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
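The "five to seven times" deuterium enrichment quoted above follows directly from the two isotope ratios:

$$ \frac{(D/H)_{\text{Mars}}}{(D/H)_{\text{Earth}}} = \frac{(9.3 \pm 1.7)\times 10^{-4}}{1.56\times 10^{-4}} \approx 6.0 \pm 1.1. $$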
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. Of all the planets, the gravitational potential difference, and thus the delta-v, needed to transfer between Earth and Mars is the second-lowest. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period averaging 780 days (779.94 days), although the number of days between successive oppositions can range from 764 to 812. Opposition should not be confused with Mars conjunction, when the Earth and Mars are on opposite sides of the Solar System, forming a straight line crossing the Sun. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets came into opposition near Mars's perihelion in 2003 and 2018 and will do so again in 2035, with the 2020 and 2033 events also being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) respectively.
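Two of the figures above can be recovered from standard two-body relations; for the Phobos estimate, Mars's gravitational parameter GM ≈ 4.283 × 10¹³ m³/s² is assumed:

$$ \frac{1}{T_{\text{syn}}} = \frac{1}{T_{\text{Earth}}} - \frac{1}{T_{\text{Mars}}} = \frac{1}{365.26\ \text{d}} - \frac{1}{686.98\ \text{d}} \quad\Rightarrow\quad T_{\text{syn}} \approx 780\ \text{d} $$

$$ T_{\text{Phobos}} = 2\pi\sqrt{\frac{a^{3}}{GM}} = 2\pi\sqrt{\frac{(9.376\times 10^{6}\ \text{m})^{3}}{4.283\times 10^{13}\ \text{m}^{3}\,\text{s}^{-2}}} \approx 2.8\times 10^{4}\ \text{s} \approx 7.7\ \text{h} $$

The latter is well below Mars's 24.6-hour rotation period, which is why Phobos, as described below, rises in the west and crosses the sky in a matter of hours.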
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity, but during later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers, and by 1534 BCE they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "fiery"), though commonly the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun–Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth sent to visit Mars was Mars 1 of the Soviet Union, which was to fly by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, among them 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from missions of these kinds has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites, and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, can preserve signs of life on Earth; such glass has also been found in impact craters on Mars and could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive determination of a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans presented in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in-situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "the Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/GunCon] | [TOKENS: 1416]
Contents GunCon The GunCon, known as the G-Con in Europe, is a family of gun peripherals designed by Namco for the PlayStation consoles. The original controllers used traditional light gun technology, while newer controllers use LED tracking technology. Background The first GunCon, NPC-103 (G-Con 45 in Europe), was bundled with the PlayStation conversion of Time Crisis. To make the gun affordable to consumers, the force feedback feature of the Time Crisis arcade gun was omitted, and an additional fire button was included in lieu of releasing a pedal controller for the game's ducking mechanic. A second version of the GunCon, known as the GunCon 2, NPC-106 (G-Con 2 in Europe), was bundled with the PlayStation 2 conversions of Time Crisis II and Time Crisis 3. Time Crisis 4 came out for the PlayStation 3 bundled with the GunCon 3, NC-109 (G-Con 3 in Europe). In Japan, all three GunCon models were also available for sale as separate accessories outside of a game bundle. The GunCon was preceded by the Hyper Blaster (sold as the Justifier in North America) manufactured by Konami, which was the first light gun peripheral for the PlayStation. Konami's Hyper Blaster and Namco's GunCon were mutually incompatible because the Hyper Blaster only needs to be plugged into either of the console's controller ports in order to work, whereas the GunCon also requires a connection to the console's video output port in order to synchronize with its video signal for better accuracy. Initially the GunCon was only designed to support Namco-developed titles, but other developers immediately started adopting it for their own gun shooting titles as well, eventually phasing out Konami's peripheral as the console's primary gun peripheral. A few titles, namely Die Hard Trilogy 2: Viva Las Vegas, Elemental Gearbolt, Maximum Force, and Mighty Hits Special, feature support for both peripherals, while certain third-party light guns were also produced that support switching between Hyper Blaster and GunCon modes. Models The GunCon controller (known as the G-Con 45 in Europe) uses the cathode-ray timing method to determine where the barrel is aimed on the screen when the trigger is pulled. It features a button below the barrel on either side of the gun (buttons A and B, both performing the same function) for auxiliary in-game control, such as taking cover and reloading in Time Crisis. The controller was released in black in Japan, and in gray (and, eventually, orange) in both Europe and North America. The controller is compatible with some PlayStation 2 GunCon titles, but is not compatible with the PlayStation 3 due to that console's lack of controller ports. Many games that support it allow the A and B buttons to be swapped, making it comfortable for both right- and left-handed players. The GunCon connects to the PlayStation console not only via its standard controller port, but also via its RCA video output jack, in order to read the console's video signal for better accuracy. For later models of the console (from the SCPH-5500 onward), which lacked the RCA output ports and only featured Sony's proprietary AV output port, as well as for the PS one models and all PS2 consoles, a PlayStation AV Adapter (SCPH-1160) or any generic AV extension cable is required to use the original GunCon. The AV Adapter was also required on the earlier consoles if an RF connection was being used instead.
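To make the cathode-ray timing method concrete, here is a simplified sketch in C. The idea is that the console records how long after the vertical sync pulse the gun's photodiode saw the electron beam sweep past the aimed-at spot, and converts that delay into screen coordinates. The timing constants and all names are illustrative NTSC-like assumptions, not Namco's actual hardware or firmware:

    #include <stdio.h>

    /* Hypothetical NTSC-like timing constants; real values depend on the
     * console's exact video mode. */
    #define USEC_PER_LINE  63.5   /* duration of one scanline (us)      */
    #define H_BLANK_USEC   10.9   /* horizontal blanking before pixels  */
    #define ACTIVE_USEC    52.6   /* visible portion of a scanline (us) */
    #define VISIBLE_LINES  240    /* displayed scanlines per field      */
    #define H_PIXELS       320    /* assumed horizontal resolution      */

    /* Convert the time elapsed between vertical sync and the photodiode
     * pulse into approximate screen coordinates. */
    static int beam_to_xy(double usec_since_vsync, int *x, int *y)
    {
        int line = (int)(usec_since_vsync / USEC_PER_LINE);
        double t = usec_since_vsync - line * USEC_PER_LINE - H_BLANK_USEC;

        if (line < 0 || line >= VISIBLE_LINES || t < 0.0 || t > ACTIVE_USEC)
            return -1;                  /* beam seen outside active video */
        *y = line;                      /* scanline count gives Y         */
        *x = (int)(t / ACTIVE_USEC * H_PIXELS);  /* sweep fraction gives X */
        return 0;
    }

    int main(void)
    {
        int x, y;
        if (beam_to_xy(3200.0, &x, &y) == 0)   /* ~50 lines into the field */
            printf("aimed at (%d, %d)\n", x, y);
        return 0;
    }

This also illustrates why the GunCon must tap the console's video output: without the sync pulses from the video signal, there is no time reference to measure the beam delay against.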
GunCon 2 (G-Con 2 in Europe) features a smaller body and a more rounded shape when compared with the original GunCon. The side buttons, A and B, have been moved rearward to a position directly above the trigger. Two new, smaller buttons, SELECT and START, have been added to the left side of the shaft. Prominent additions to this second GunCon model are a D-pad at the back of the gun barrel and a C button at the bottom of the gun handle. These new buttons served to open new gameplay opportunities, such as character movement in Dino Stalker or the ability to use two guns at once in Time Crisis II. The gun uses a USB connection, as opposed to the PlayStation controller port of the original GunCon, and also hooks into the video signal of the console (either composite video or the Y signal of component video). The controller was released in black in Japan, blue in Europe, and orange in North America. It is not compatible with original PlayStation titles or PlayStation 3 titles. The GunCon 2, with compatible games, can work on older models of the PlayStation 3 featuring any form of hardware-based PlayStation 2 backwards compatibility. The GunCon 3 utilizes two infrared LED lights as markers, placed on the left and right sides of the screen. An image sensor in the muzzle tracks the markers as reference points for determining where the gun is pointing on the screen; a sketch of this two-marker geometry follows at the end of this article. As opposed to the GunCon and GunCon 2, which are only compatible with CRT-based displays, the GunCon 3 supports a wide variety of display types, including LCD and plasma. The GunCon 3 features a "sub-grip", mounted underneath the barrel and extending to the left side for use with the left hand. On the sub-grip are an analog stick and two shoulder buttons, as on a modern gamepad. At the back end of the gun barrel is another analog stick, with two buttons, B1 and B2, underneath. Another two buttons, C1 and C2, are placed along the left side of the barrel. The analog sticks allow the player to play first-person shooting games with manual aiming/firing of the light gun. Compatible video games with GunCon Some GunCon 2 (PS2) games are compatible with the original GunCon, unless the game utilizes the extra buttons on the GunCon 2. iGunCon iGunCon for iOS was released on July 21, 2011; it allows players to use an iPhone or iPod Touch in a similar fashion to the GunCon in Time Crisis 2nd Strike, an iOS-exclusive entry in the Time Crisis series. iGunCon, along with Time Crisis 2nd Strike, was pulled from the App Store in March 2015. Reception Electronic Gaming Monthly's four-person "review crew" gave the original GunCon scores of 7.5, 7.0, 8.0, and 7.5 out of 10. They criticized Namco's decisions to make it compatible only with Namco games and to make Namco games incompatible with other light guns, but praised the GunCon's extreme precision and accuracy, in particular when firing near the edge of the screen (a common trouble spot for light guns). Lead reviewer Crispin Boyer was also pleased with the low price of the GunCon/Time Crisis bundle. VG247 called the GunCon 3's design "hideous".
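As promised above, a hedged sketch of GunCon 3-style marker tracking, again in C. The geometry is deliberately simplified and every constant and name is hypothetical; the real controller's optics, calibration, and USB protocol are not described in this article. The sketch only captures the core idea: the midpoint of the two IR markers, as seen by the muzzle camera, shifts opposite to the aim point, and the marker separation gives a crude distance scale.

    #include <stdio.h>

    typedef struct { double x, y; } point;

    #define SENSOR_W 640.0   /* hypothetical image-sensor resolution */
    #define SENSOR_H 480.0

    /* Estimate normalized screen coordinates (0..1) from the two marker
     * images. Assumes right.x > left.x (markers resolved and ordered). */
    static point aim_from_markers(point left, point right)
    {
        point mid = { (left.x + right.x) / 2.0, (left.y + right.y) / 2.0 };
        double sep = right.x - left.x;   /* marker separation in pixels  */
        double scale = 1.0 / sep;        /* crude distance compensation  */

        point aim;
        /* Aiming off-centre moves the markers the opposite way in the
         * camera frame, so the offset from the sensor centre is inverted. */
        aim.x = 0.5 - (mid.x - SENSOR_W / 2.0) * scale;
        aim.y = 0.5 - (mid.y - SENSOR_H / 2.0) * scale;
        return aim;
    }

    int main(void)
    {
        point l = { 280.0, 240.0 }, r = { 360.0, 240.0 };
        point aim = aim_from_markers(l, r);
        printf("aim: (%.2f, %.2f)\n", aim.x, aim.y);  /* centre: 0.50, 0.50 */
        return 0;
    }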
========================================
[SOURCE: https://en.wikipedia.org/wiki/Verilog] | [TOKENS: 3522]
Contents Verilog Verilog, standardized as IEEE 1364, is a hardware description language (HDL) used to model electronic systems. It is most commonly used in the design and verification of digital circuits, with the highest level of abstraction being at the register-transfer level. It is also used in the verification of analog circuits and mixed-signal circuits, as well as in the design of genetic circuits. In 2009, the Verilog standard (IEEE 1364-2005) was merged into the SystemVerilog standard, creating IEEE Standard 1800-2009. Since then, Verilog has been officially part of the SystemVerilog language. The current version is IEEE standard 1800-2023. Overview Hardware description languages such as Verilog use software-like syntax, but unlike software programming languages they model physical hardware, including concurrent operation, signal timing, and electrical behavior. There are two types of assignment operators: a blocking assignment (=) and a non-blocking (<=) assignment. The non-blocking assignment allows designers to describe a state-machine update without needing to declare and use temporary storage variables. Since these concepts are part of Verilog's language semantics, designers could quickly write descriptions of large circuits in a relatively compact and concise form. At the time of Verilog's introduction (1984), it represented a tremendous productivity improvement for circuit designers who were already using graphical schematic capture software and specially written software programs to document and simulate electronic circuits. The designers of Verilog wanted a language with syntax similar to the C programming language, which was already widely used in engineering software development. Like C, Verilog is case-sensitive and has a basic preprocessor (though less sophisticated than that of ANSI C/C++). Its control flow keywords (if/else, for, while, case, etc.) are equivalent, and its operator precedence is compatible with C. Syntactic differences include: required bit-widths for variable declarations, demarcation of procedural blocks (Verilog uses begin/end instead of curly braces {}), and many other minor differences. Verilog requires that variables be given a definite size. In C these sizes are inferred from the 'type' of the variable (for instance, an integer type may be 32 bits). A Verilog design consists of a hierarchy of modules. Modules encapsulate design hierarchy, and communicate with other modules through a set of declared input, output, and bidirectional ports. Internally, a module can contain any combination of the following: net/variable declarations (wire, reg, integer, etc.), concurrent and sequential statement blocks, and instances of other modules (sub-hierarchies). Sequential statements are placed inside a begin/end block and executed in sequential order within the block. However, the blocks themselves are executed concurrently, making Verilog a dataflow language. Verilog's concept of 'wire' consists of both signal values (4-state: "1, 0, floating, undefined") and signal strengths (strong, weak, etc.). This system allows abstract modeling of shared signal lines, where multiple sources drive a common net. When a wire has multiple drivers, the wire's (readable) value is resolved by a function of the source drivers and their strengths. A subset of statements in the Verilog language is synthesizable. Verilog modules that conform to a synthesizable coding style, known as RTL (register-transfer level), can be physically realized by synthesis software.
Synthesis software algorithmically transforms the (abstract) Verilog source into a netlist, a logically equivalent description consisting only of elementary logic primitives (AND, OR, NOT, flip-flops, etc.) that are available in a specific FPGA or VLSI technology. Further manipulations of the netlist ultimately lead to a circuit fabrication blueprint (such as a photo mask set for an ASIC or a bitstream file for an FPGA). History Verilog was created by Prabhu Goel, Phil Moorby and Chi-Lai Huang between late 1983 and early 1984. Chi-Lai Huang had earlier worked on a hardware description language, LALSD, developed by Professor S.Y.H. Su, for his PhD work. The rights holder for the language, at the time proprietary, was "Automated Integrated Design Systems" (renamed Gateway Design Automation in 1985). Gateway Design Automation was purchased by Cadence Design Systems in 1990. Cadence now has full proprietary rights to Gateway's Verilog and to Verilog-XL, the HDL simulator that would become the de facto standard (of Verilog logic simulators) for the next decade. Originally, Verilog was only intended to describe and allow simulation; the automated synthesis of subsets of the language to physically realizable structures (gates etc.) was developed after the language had achieved widespread usage. Verilog is a portmanteau of the words "verification" and "logic". With the increasing success of VHDL at the time, Cadence decided to make the language available for open standardization. Cadence transferred Verilog into the public domain under the Open Verilog International (OVI) organization (now known as Accellera). Verilog was later submitted to IEEE and became IEEE Standard 1364-1995, commonly referred to as Verilog-95. In the same time frame, Cadence initiated the creation of Verilog-A to put standards support behind its analog simulator Spectre. Verilog-A was never intended to be a standalone language and is a subset of Verilog-AMS, which encompassed Verilog-95. Extensions to Verilog-95 were submitted back to IEEE to cover the deficiencies that users had found in the original Verilog standard. These extensions became IEEE Standard 1364-2001, known as Verilog-2001. Verilog-2001 is a significant upgrade from Verilog-95. First, it adds explicit support for (2's complement) signed nets and variables. Previously, code authors had to perform signed operations using awkward bit-level manipulations (for example, the carry-out bit of a simple 8-bit addition required an explicit description of the Boolean algebra to determine its correct value). The same function under Verilog-2001 can be more succinctly described by one of the built-in operators: +, -, /, *, >>>. A generate–endgenerate construct (similar to VHDL's generate–endgenerate) allows Verilog-2001 to control instance and statement instantiation through normal decision operators (case–if–else). Using generate–endgenerate, Verilog-2001 can instantiate an array of instances, with control over the connectivity of the individual instances. File I/O has been improved by several new system tasks. And finally, a few syntax additions were introduced to improve code readability (e.g. always @*, named parameter override, C-style function/task/module header declaration). Verilog-2001 is the version of Verilog supported by the majority of commercial EDA software packages.
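As a sketch of the generate–endgenerate construct just described (a hypothetical ripple-carry adder; the full_adder module is assumed to be defined elsewhere):

    module adder8 (input  [7:0] a, b,
                   input        cin,
                   output [7:0] sum,
                   output       cout);
      wire [8:0] c;          // ripple-carry chain
      assign c[0] = cin;

      genvar i;
      generate
        for (i = 0; i < 8; i = i + 1) begin : stage
          // one full-adder instance per bit, wired into the carry chain
          full_adder fa (.a(a[i]), .b(b[i]), .cin(c[i]),
                         .s(sum[i]), .cout(c[i+1]));
        end
      endgenerate

      assign cout = c[8];
    endmodule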
Not to be confused with SystemVerilog, Verilog 2005 (IEEE Standard 1364-2005) consists of minor corrections, spec clarifications, and a few new language features (such as the uwire keyword). A separate part of the Verilog standard, Verilog-AMS, attempts to integrate analog and mixed-signal modeling with traditional Verilog. The advent of hardware verification languages such as OpenVera and Verisity's e language encouraged the development of Superlog by Co-Design Automation Inc (acquired by Synopsys). The foundations of Superlog and Vera were donated to Accellera, which later became the IEEE standard P1800-2005: SystemVerilog. SystemVerilog is a superset of Verilog-2005, with many new features and capabilities to aid design verification and design modeling. As of 2009, the SystemVerilog and Verilog language standards were merged into SystemVerilog 2009 (IEEE Standard 1800-2009). The SystemVerilog standard was subsequently updated in 2012, 2017, and most recently in December 2023.

Example

A simple example of two flip-flops is sketched at the end of this section. The <= operator in Verilog is another aspect of its being a hardware description language as opposed to a normal procedural language. This is known as a "non-blocking" assignment. Its action does not register until after the always block has executed. This means that the order of the assignments is irrelevant and will produce the same result: flop1 and flop2 will swap values every clock. The other assignment operator, =, is referred to as a blocking assignment. When = assignment is used, for the purposes of logic, the target variable is updated immediately. In that example, had the statements used the = blocking operator instead of <=, flop1 and flop2 would not have been swapped. Instead, as in traditional programming, the compiler would understand to simply set flop1 equal to flop2 (and subsequently ignore the redundant logic to set flop2 equal to flop1). Example counter and delay circuits are likewise sketched at the end of this section. The always clause in the delay example illustrates the other type of method of use, i.e. it executes whenever any of the entities in its sensitivity list (the b or e) changes. When one of these changes, a is immediately assigned a new value, and due to the blocking assignment, b is assigned a new value afterward (taking into account the new value of a). After a delay of 5 time units, c is assigned the value of b and the value of c ^ e is tucked away in an invisible store. Then after 6 more time units, d is assigned the value that was tucked away. Signals that are driven from within a process (an initial or always block) must be of type reg. Signals that are driven from outside a process must be of type wire. The keyword reg does not necessarily imply a hardware register.

Definition of constants

The definition of constants in Verilog supports the addition of a width parameter. The basic syntax is: <size>'<base><value>, where <size> is the number of bits expressed in decimal, <base> is a letter selecting the radix (b for binary, o for octal, d for decimal, h for hexadecimal), and <value> gives the constant's digits in that base. Examples: 12'h123 is a 12-bit constant with hexadecimal value 123, 20'd44 is a 20-bit constant with decimal value 44, and 4'b1010 is a 4-bit constant with binary value 1010.

Synthesizable constructs

There are several statements in Verilog that have no analog in real hardware, such as the $display command. However, the examples presented here are the classic (and limited) subset of the language that has a direct mapping to real gates. The next interesting structure is a transparent latch; it passes the input to the output when the gate signal is set for "pass-through", and captures the input and stores it upon transition of the gate signal to "hold". The output will remain stable regardless of the input signal while the gate is set to "hold".
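Such a transparent latch can be written as in the following sketch (a minimal, illustrative coding, using the signal names of the discussion that follows):

```verilog
module tlatch (input wire gate, input wire din, output reg latch_out);
  always @(gate or din)
    if (gate)
      latch_out <= din;   // no else branch: synthesis infers a latch
endmodule
```

The two-flip-flop example referred to earlier in this section can be sketched as follows (illustrative names; the essential point is the pair of non-blocking assignments):

```verilog
// Two flip-flops that swap values on every rising clock edge.
module swap_flops (input wire clk, input wire reset);
  reg flop1, flop2;

  always @(posedge clk)
    if (reset) begin
      flop1 <= 1'b0;
      flop2 <= 1'b1;
    end else begin
      flop1 <= flop2;   // right-hand sides are sampled before any update,
      flop2 <= flop1;   // so the order of these two lines is irrelevant
    end
endmodule
```

A counter circuit along the lines mentioned above might look like this (a sketch, assuming an 8-bit count with asynchronous reset and an enable):

```verilog
module counter (input wire clk, input wire reset, input wire enable,
                output reg [7:0] count);
  always @(posedge clk or posedge reset)
    if (reset)
      count <= 8'd0;           // asynchronous reset clears the count
    else if (enable)
      count <= count + 8'd1;   // increment only when enabled
endmodule
```

The delay example described above can be reconstructed from the surrounding text as (a sketch):

```verilog
module delays (input wire e);
  reg a, b, c, d;

  always @(b or e) begin
    a = b & e;      // a is updated immediately
    b = a | b;      // blocking: uses the new value of a
    #5 c = b;       // after 5 time units, c takes the value of b
    d = #6 c ^ e;   // c ^ e is evaluated now and assigned to d
                    // 6 time units later (intra-assignment delay)
  end
endmodule
```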
In the latch example sketched above, the "pass-through" level of the gate is when the value of the if clause is true, i.e. gate = 1. This is read "if gate is true, din is fed to latch_out continuously." Once the if clause is false, the last value at latch_out remains, independent of the value of din. The flip-flop is the next significant template; in Verilog, the D-flop is the simplest, and it can be modeled as a single non-blocking assignment inside an edge-triggered always block (see the sketches after this passage). The significant thing to notice in that example is the use of the non-blocking assignment. A basic rule of thumb is to use <= when there is a posedge or negedge statement within the always clause. A variant of the D-flop is one with an asynchronous reset; there is a convention that the reset state will be the first if clause within the statement. The next variant includes both an asynchronous reset and an asynchronous set condition; again the convention comes into play, i.e. the reset term is followed by the set term. Note: if this model is used to model a set/reset flip-flop, simulation errors can result. Consider the following test sequence of events: (1) reset goes high, (2) clk goes high, (3) set goes high, (4) clk goes high again, (5) reset goes low, followed by (6) set going low. Assume no setup and hold violations. In this example the always @ statement would first execute when the rising edge of reset occurs, which would place q at a value of 0. The next time the always block executes would be the rising edge of clk, which again would keep q at a value of 0. The always block then executes when set goes high, which, because reset is high, forces q to remain at 0. This condition may or may not be correct depending on the actual flip-flop. However, this is not the main problem with this model. Notice that when reset goes low, set is still high. In a real flip-flop this will cause the output to go to a 1. However, in this model it will not occur, because the always block is triggered by rising edges of set and reset – not levels. A different approach may be necessary for set/reset flip-flops. The final basic variant is one that implements a D-flop with a mux feeding its input. The mux has a d-input and feedback from the flop itself. This allows a gated load function. Note that there are no "initial" blocks mentioned in this description. There is a split between FPGA and ASIC synthesis tools on this structure. FPGA tools allow initial blocks where reg values are established instead of using a "reset" signal. ASIC synthesis tools don't support such a statement. The reason is that an FPGA's initial state is something that is downloaded into the memory tables of the FPGA. An ASIC is an actual hardware implementation.

Initial and always

There are two separate ways of declaring a Verilog process: the always and the initial keywords. The always keyword indicates a free-running process. The initial keyword indicates a process that executes exactly once. Both constructs begin execution at simulator time 0, and both execute until the end of the block. Once an always block has reached its end, it is rescheduled (again). It is a common misconception to believe that an initial block will execute before an always block. In fact, it is better to think of the initial-block as a special case of the always-block, one which terminates after it completes for the first time. These are the classic uses for these two keywords, but there are two significant additional uses. The most common of these is an always keyword without the @(...) sensitivity list.
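The flip-flop variants discussed above can be written as in the following sketches (minimal, illustrative codings):

```verilog
// Simplest D flip-flop.
module dff (input wire clk, input wire d, output reg q);
  always @(posedge clk)
    q <= d;                  // non-blocking, per the rule of thumb above
endmodule

// Variant with asynchronous reset and set; by convention,
// the reset term comes first, followed by the set term.
module dff_rs (input wire clk, input wire reset, input wire set,
               input wire d, output reg q);
  always @(posedge clk or posedge reset or posedge set)
    if (reset)
      q <= 1'b0;
    else if (set)
      q <= 1'b1;
    else
      q <= d;
endmodule

// Variant with a mux feeding the d-input (gated load).
module dff_load (input wire clk, input wire load, input wire d,
                 output reg q);
  always @(posedge clk)
    q <= load ? d : q;       // feedback from the flop when not loading
endmodule
```

An always block with no sensitivity list can serve, for example, as a free-running clock generator (a sketch):

```verilog
module clockgen (output reg clk);
  initial clk = 1'b0;        // establish a starting value
  always #5 clk = ~clk;      // toggles forever, every 5 time units
endmodule
```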
The always keyword used this way acts similarly to the C language construct while(1) {..} in the sense that it executes forever. The other interesting exception is the use of the initial keyword with the addition of the forever keyword; an initial block containing a forever loop is functionally identical to such an always block.

Fork/join

The fork/join pair are used by Verilog to create parallel processes. All statements (or blocks) between a fork/join pair begin execution simultaneously upon execution flow hitting the fork. Execution continues after the join upon completion of the longest-running statement or block between the fork and join. In a fragment that forks two $write statements printing "A" and "B" together with a delayed third printing "C" (a sketch appears at the end of this section), it is possible to have either the sequence "ABC" or "BAC" print out. The order of simulation between the first $write and the second $write depends on the simulator implementation, and may purposefully be randomized by the simulator. This allows the simulation to contain both accidental race conditions and intentional non-deterministic behavior. Note that VHDL cannot dynamically spawn multiple processes the way Verilog can.

Race conditions

The order of execution is not always guaranteed within Verilog. This can best be illustrated by a classic example: two initial blocks, one assigning a = 0 and the other assigning b = a (a sketch appears at the end of this section). Depending on the order of execution of the initial blocks, the result could be zero and zero, or alternately zero and some other arbitrary uninitialized value. The $display statement will always execute after both assignment blocks have completed, due to the #1 delay.

Operators

Note: these operators are not shown in order of precedence.

Four-valued logic

The IEEE 1364 standard defines a four-valued logic, with states 0, 1, Z (high impedance), and X (unknown logic value). For the competing VHDL, a dedicated standard for multi-valued logic exists as IEEE 1164, with nine levels.

System tasks

System tasks are available to handle simple I/O and various design measurement functions during simulation. All system tasks are prefixed with $ to distinguish them from user tasks and functions. Frequently used tasks include, for example, $display (print a line to the console), $monitor (print whenever a watched signal changes), $time (the current simulation time), and $finish (end the simulation); this is by no means a comprehensive list.

Program Language Interface (PLI)

The PLI provides a programmer with a mechanism to transfer control from Verilog to a program function written in C language. It is officially deprecated by IEEE Std 1364-2005 in favor of the newer Verilog Procedural Interface, which completely replaces the PLI. The PLI (now VPI) enables Verilog to cooperate with other programs written in the C language, such as test harnesses, instruction set simulators of a microcontroller, debuggers, and so on. For example, it provides the C functions tf_putlongp() and tf_getlongp(), which are used to write and read the 64-bit integer argument of the current Verilog task or function, respectively. For 32-bit integers, tf_putp() and tf_getp() are used.

Simulation software

For information on Verilog simulators, see the list of Verilog simulators.
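The fork/join fragment described above might be written as follows (a minimal sketch):

```verilog
module fork_join_demo;
  initial fork
    $write("A");     // these two arms may run in either order
    $write("B");
    begin
      #1;            // waiting 1 time unit guarantees "C" prints last
      $write("C");
    end
  join
endmodule
```

And the classic race-condition snippet can be sketched as:

```verilog
module race_demo;
  integer a, b;

  initial a = 0;     // one process initializes a
  initial b = a;     // another reads a: before or after the line above?
                     // if it runs first, b gets a's uninitialized value (x)
  initial begin
    #1;              // by now both blocks above have run
    $display("a=%0d b=%0d", a, b);
  end
endmodule
```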
========================================
[SOURCE: https://en.wikipedia.org/wiki/Lorentz_transformation] | [TOKENS: 21967]
Lorentz transformation

In physics, the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz. The most common form of the transformation, parametrized by the real constant $v$, representing a velocity confined to the x-direction, is expressed as

$$\begin{aligned} t' &= \gamma \left(t - \frac{vx}{c^2}\right) \\ x' &= \gamma \left(x - vt\right) \\ y' &= y \\ z' &= z \end{aligned}$$

where (t, x, y, z) and (t′, x′, y′, z′) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0, where the primed frame is seen from the unprimed frame as moving with speed v along the x-axis, where c is the speed of light, and

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

is the Lorentz factor. When speed v is much smaller than c, the Lorentz factor is negligibly different from 1, but as v approaches c, $\gamma$ grows without bound. The value of v must be smaller than c for the transformation to make sense. Expressing the speed as a fraction of the speed of light, $\beta = v/c$, an equivalent form of the transformation is

$$\begin{aligned} ct' &= \gamma \left(ct - \beta x\right) \\ x' &= \gamma \left(x - \beta ct\right) \\ y' &= y \\ z' &= z. \end{aligned}$$

Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity, etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity. In each reference frame, an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime. The transformations connect the space and time coordinates of an event as measured by an observer in each frame.[nb 1] They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events, but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity. Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The transformations later became a cornerstone for special relativity. The Lorentz transformation is a linear transformation.
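Numerically (a worked illustration, with the speed chosen here for convenience): for $v = 0.6c$,

$$\gamma = \frac{1}{\sqrt{1 - 0.6^2}} = \frac{1}{\sqrt{0.64}} = 1.25,$$

so the boost above becomes $t' = 1.25\,(t - 0.6\,x/c)$ and $x' = 1.25\,(x - 0.6\,ct)$.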
It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost. In Minkowski space—the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.

History

Many physicists—including Woldemar Voigt, George FitzGerald, Joseph Larmor, and Hendrik Lorentz himself—had been discussing the physics implied by these equations since 1887. Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether. FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called the FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905. Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation to local time (to first order in v/c, the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. Larmor is credited with having been the first to understand the crucial time dilation property inherent in his equations. In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and he named it after Lorentz. Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanistic aether as unnecessary.

Derivation of the group of Lorentz transformations

An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates x, y, z to specify position in space in that frame. Subscripts label individual events. From Einstein's second postulate of relativity (invariance of c) it follows that

$$c^2(t_2 - t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2 = 0 \tag{D1}$$

in all inertial frames for events connected by light signals. The quantity on the left is called the spacetime interval between events $a_1 = (t_1, x_1, y_1, z_1)$ and $a_2 = (t_2, x_2, y_2, z_2)$.
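Invariance can be checked directly for the boost given in the introduction (a short verification):

$$c^2t'^2 - x'^2 = \gamma^2(ct - \beta x)^2 - \gamma^2(x - \beta ct)^2 = \gamma^2(1 - \beta^2)\,(c^2t^2 - x^2) = c^2t^2 - x^2,$$

since $\gamma^2 = 1/(1 - \beta^2)$, while the y and z coordinates are unchanged.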
The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space. The transformation sought after thus must possess the property that

$$c^2(t_2' - t_1')^2 - (x_2' - x_1')^2 - (y_2' - y_1')^2 - (z_2' - z_1')^2 = c^2(t_2 - t_1)^2 - (x_2 - x_1)^2 - (y_2 - y_1)^2 - (z_2 - z_1)^2 \tag{D2}$$

where (t, x, y, z) are the spacetime coordinates used to define events in one frame, and (t′, x′, y′, z′) are the coordinates in another frame. First one observes that (D2) is satisfied if an arbitrary 4-tuple b of numbers is added to events a1 and a2. Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem

$$c^2t'^2 - x'^2 - y'^2 - z'^2 = c^2t^2 - x^2 - y^2 - z^2 \tag{D3}$$

solves the general problem too (a solution satisfying this formula automatically satisfies (D2) as well; see Polarization identity). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature.[nb 2] Equation (D3) can be written more compactly as

$$(a', a') = (a, a)$$

where (·, ·) refers to the bilinear form of signature (1, 3) on R4 exposed by the formula in (D3). The alternative notation defined on the right is referred to as the relativistic dot product. Spacetime mathematically viewed as R4 endowed with this bilinear form is known as Minkowski space M. The Lorentz transformation is thus an element of the group O(1, 3), the Lorentz group or, for those who prefer the other metric signature, O(3, 1) (also called the Lorentz group).[nb 3] One has

$$(\Lambda a, \Lambda a) = (a, a),$$

which is precisely preservation of the bilinear form (D3), and which implies (by linearity of Λ and bilinearity of the form) that (D2) is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group.

Generalities

The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations; each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations. Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts, and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is only a rotation in the spatial coordinates. Unlike boosts, these are inertial transformations since there is no relative motion; the frames are simply tilted (and not continuously rotating), and in this case quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation, or Euler angles, etc.). A combination of a rotation and a boost is a homogeneous transformation, which transforms the origin back to the origin. The full Lorentz group O(3, 1) also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin.
Two of these can be singled out: spatial inversion, in which the spatial coordinates of all events are reversed in sign, and temporal inversion, in which the time coordinate for each event gets its sign reversed. Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation, an element of the Poincaré group, which is also called the inhomogeneous Lorentz group.

Physical formulation of Lorentz boosts

A "stationary" observer in frame F defines events with coordinates t, x, y, z. Another frame F′ moves with velocity v relative to F, and an observer in this "moving" frame F′ defines events using the coordinates t′, x′, y′, z′. The coordinate axes in each frame are parallel (the x and x′ axes are parallel, the y and y′ axes are parallel, and the z and z′ axes are parallel), remain mutually perpendicular, and relative motion is along the coincident xx′ axes. At t = t′ = 0, the origins of both coordinate systems are the same, (x, y, z) = (x′, y′, z′) = (0, 0, 0). In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration, or synchronized. If an observer in F records an event t, x, y, z, then an observer in F′ records the same event with coordinates

$$\begin{aligned} t' &= \gamma \left(t - \frac{vx}{c^2}\right) \\ x' &= \gamma \left(x - vt\right) \\ y' &= y \\ z' &= z \end{aligned}$$

where v is the relative velocity between frames in the x-direction, c is the speed of light, and

$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$

(lowercase gamma) is the Lorentz factor. Here, v is the parameter of the transformation; for a given boost it is a constant number, but it can take a continuous range of values. In the setup used here, positive relative velocity v > 0 is motion along the positive directions of the xx′ axes, zero relative velocity v = 0 is no relative motion, while negative relative velocity v < 0 is relative motion along the negative directions of the xx′ axes. The magnitude of relative velocity v cannot equal or exceed c, so only subluminal speeds −c < v < c are allowed. The corresponding range of γ is 1 ≤ γ < ∞. The transformations are not defined if v is outside these limits. At the speed of light (v = c) γ is infinite, and faster than light (v > c) γ is a complex number, each of which make the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers. As an active transformation, an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the xx′ axes, because of the −v in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the xx′ axes, while the event does not change and is simply represented in another coordinate system, a passive transformation. The inverse relations (t, x, y, z in terms of t′, x′, y′, z′) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here F′ is the "stationary" frame while F is the "moving" frame.
According to the principle of relativity, there is no privileged frame of reference, so the transformations from F′ to F must take exactly the same form as the transformations from F to F′. The only difference is F moves with velocity −v relative to F′ (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in F′ notes an event t′, x′, y′, z′, then an observer in F notes the same event with coordinates

$$\begin{aligned} t &= \gamma \left(t' + \frac{vx'}{c^2}\right) \\ x &= \gamma \left(x' + vt'\right) \\ y &= y' \\ z &= z', \end{aligned}$$

and the value of γ remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction. Sometimes it is more convenient to use β = v/c (lowercase beta) instead of v, so that

$$\begin{aligned} ct' &= \gamma \left(ct - \beta x\right)\,, \\ x' &= \gamma \left(x - \beta ct\right)\,, \end{aligned}$$

which shows much more clearly the symmetry in the transformation. From the allowed ranges of v and the definition of β, it follows −1 < β < 1. The use of β and γ is standard throughout the literature. In the case of three spatial dimensions [ct, x, y, z], where the boost $\beta$ is in the x direction, the eigenstates of the transformation are [1, 1, 0, 0] with eigenvalue $\sqrt{(1-\beta)/(1+\beta)}$, [1, −1, 0, 0] with eigenvalue $\sqrt{(1+\beta)/(1-\beta)}$, and [0, 0, 1, 0] and [0, 0, 0, 1], the latter two with eigenvalue 1.
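The first eigenvalue can be verified directly from the β-form of the boost above (a one-line check): setting (ct, x) = (1, 1),

$$ct' = x' = \gamma(1 - \beta) = \frac{1 - \beta}{\sqrt{(1-\beta)(1+\beta)}} = \sqrt{\frac{1-\beta}{1+\beta}},$$

so [1, 1, 0, 0] is indeed rescaled by $\sqrt{(1-\beta)/(1+\beta)}$; the check for [1, −1, 0, 0] is analogous.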
When the boost velocity $\boldsymbol{v}$ is in an arbitrary vector direction with the boost vector $\boldsymbol{\beta} = \boldsymbol{v}/c$, then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by

$$\begin{bmatrix} ct' \\ x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \gamma & -\gamma\beta_x & -\gamma\beta_y & -\gamma\beta_z \\ -\gamma\beta_x & 1 + \frac{\gamma^2}{1+\gamma}\beta_x^2 & \frac{\gamma^2}{1+\gamma}\beta_x\beta_y & \frac{\gamma^2}{1+\gamma}\beta_x\beta_z \\ -\gamma\beta_y & \frac{\gamma^2}{1+\gamma}\beta_x\beta_y & 1 + \frac{\gamma^2}{1+\gamma}\beta_y^2 & \frac{\gamma^2}{1+\gamma}\beta_y\beta_z \\ -\gamma\beta_z & \frac{\gamma^2}{1+\gamma}\beta_x\beta_z & \frac{\gamma^2}{1+\gamma}\beta_y\beta_z & 1 + \frac{\gamma^2}{1+\gamma}\beta_z^2 \end{bmatrix} \begin{bmatrix} ct \\ x \\ y \\ z \end{bmatrix},$$

where the Lorentz factor is $\gamma = 1/\sqrt{1 - \boldsymbol{\beta}^2}$. The determinant of the transformation matrix is +1 and its trace is $2(1+\gamma)$. The inverse of the transformation is given by reversing the sign of $\boldsymbol{\beta}$. The quantity $c^2t^2 - x^2 - y^2 - z^2$ is invariant under the transformation: namely $c^2t'^2 - x'^2 - y'^2 - z'^2 = c^2t^2 - x^2 - y^2 - z^2$. The Lorentz transformations can also be derived in a way that resembles circular rotations in 3-dimensional space using the hyperbolic functions. For the boost in the x direction, the results are

$$\begin{aligned} ct' &= ct\cosh\zeta - x\sinh\zeta \\ x' &= x\cosh\zeta - ct\sinh\zeta \\ y' &= y \\ z' &= z \end{aligned}$$

where ζ (lowercase zeta) is a parameter called rapidity (many other symbols are used, including θ, ϕ, φ, η, ψ, ξ). Given the strong resemblance to rotations of spatial coordinates in 3-dimensional space in the Cartesian xy, yz, and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4-dimensional Minkowski space.
The parameter ζ is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram. The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking x = 0 or ct = 0 in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying ζ, which parametrizes the curves according to the identity

$$\cosh^2\zeta - \sinh^2\zeta = 1\,.$$

Conversely the ct and x axes can be constructed for varying coordinates but constant ζ. The definition

$$\tanh\zeta = \frac{\sinh\zeta}{\cosh\zeta}\,,$$

provides the link between a constant value of rapidity and the slope of the ct axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor,

$$\cosh\zeta = \frac{1}{\sqrt{1 - \tanh^2\zeta}}\,.$$

Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between β, γ, and ζ are

$$\begin{aligned} \beta &= \tanh\zeta\,, \\ \gamma &= \cosh\zeta\,, \\ \beta\gamma &= \sinh\zeta\,. \end{aligned}$$

Taking the inverse hyperbolic tangent gives the rapidity

$$\zeta = \tanh^{-1}\beta\,.$$

Since −1 < β < 1, it follows that −∞ < ζ < ∞. From the relation between ζ and β, positive rapidity ζ > 0 is motion along the positive directions of the xx′ axes, zero rapidity ζ = 0 is no relative motion, while negative rapidity ζ < 0 is relative motion along the negative directions of the xx′ axes. The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity ζ → −ζ, since this is equivalent to negating the relative velocity. Therefore,

$$\begin{aligned} ct &= ct'\cosh\zeta + x'\sinh\zeta \\ x &= x'\cosh\zeta + ct'\sinh\zeta \\ y &= y' \\ z &= z' \end{aligned}$$

The inverse transformations can be similarly visualized by considering the cases when x′ = 0 and ct′ = 0. So far the Lorentz transformations have been applied to one event. If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, and then subtracted to get the Lorentz transformations of the differences:

$$\begin{aligned} \Delta t' &= \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right)\,, \\ \Delta x' &= \gamma\left(\Delta x - v\,\Delta t\right)\,, \end{aligned}$$

with inverse relations

$$\begin{aligned} \Delta t &= \gamma\left(\Delta t' + \frac{v\,\Delta x'}{c^2}\right)\,, \\ \Delta x &= \gamma\left(\Delta x' + v\,\Delta t'\right)\,, \end{aligned}$$

where Δ (uppercase delta) indicates a difference of quantities; e.g., $\Delta x = x_2 - x_1$ for two values of x coordinates, and so on.
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons: differences are what enter measured lengths and elapsed times, and they are insensitive to the otherwise arbitrary choice of spacetime origin. A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in F the equation for a pulse of light along the x direction is x = ct, then in F′ the Lorentz transformations give x′ = ct′, and vice versa, for any −c < v < c. For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation:

$$\begin{aligned} t' &\approx t \\ x' &\approx x - vt \end{aligned}$$

in accordance with the correspondence principle. It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance". Three counterintuitive, but correct, predictions of the transformations are the relativity of simultaneity, time dilation, and length contraction. The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector v with a magnitude |v| = v that cannot equal or exceed c, so that 0 ≤ v < c. Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector r as measured in F, and r′ as measured in F′, each into components perpendicular (⊥) and parallel (∥) to v,

$$\mathbf{r} = \mathbf{r}_\perp + \mathbf{r}_\|\,, \quad \mathbf{r}' = \mathbf{r}'_\perp + \mathbf{r}'_\|\,,$$

then the transformations are

$$\begin{aligned} t' &= \gamma\left(t - \frac{\mathbf{r}_\parallel \cdot \mathbf{v}}{c^2}\right) \\ \mathbf{r}'_\| &= \gamma(\mathbf{r}_\| - \mathbf{v}t) \\ \mathbf{r}'_\perp &= \mathbf{r}_\perp \end{aligned}$$

where · is the dot product. The Lorentz factor γ retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition β = v/c with magnitude 0 ≤ β < 1 is also used by some authors. Introducing a unit vector n = v/v = β/β in the direction of relative motion, the relative velocity is v = vn with magnitude v and direction n, and vector projection and rejection give respectively

$$\mathbf{r}_\parallel = (\mathbf{r}\cdot\mathbf{n})\mathbf{n}\,, \quad \mathbf{r}_\perp = \mathbf{r} - (\mathbf{r}\cdot\mathbf{n})\mathbf{n}$$

Accumulating the results gives the full transformations,

$$\begin{aligned} t' &= \gamma\left(t - \frac{v\,\mathbf{n}\cdot\mathbf{r}}{c^2}\right)\,, \\ \mathbf{r}' &= \mathbf{r} + (\gamma - 1)(\mathbf{r}\cdot\mathbf{n})\mathbf{n} - \gamma t v\mathbf{n}\,. \end{aligned}$$

The projection and rejection also apply to r′.
For the inverse transformations, exchange r and r′ to switch observed coordinates, and negate the relative velocity v → −v (or simply the unit vector n → −n, since the magnitude v is always positive) to obtain

$$\begin{aligned} t &= \gamma\left(t' + \frac{\mathbf{r}'\cdot v\mathbf{n}}{c^2}\right)\,, \\ \mathbf{r} &= \mathbf{r}' + (\gamma - 1)(\mathbf{r}'\cdot\mathbf{n})\mathbf{n} + \gamma t' v\mathbf{n}\,, \end{aligned}$$

The unit vector has the advantage of simplifying equations for a single boost, allows either v or β to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing β and βγ. It is not convenient for multiple boosts. The vectorial relation between relative velocity and rapidity is

$$\boldsymbol{\beta} = \beta\mathbf{n} = \mathbf{n}\tanh\zeta\,,$$

and the "rapidity vector" can be defined as

$$\boldsymbol{\zeta} = \zeta\mathbf{n} = \mathbf{n}\tanh^{-1}\beta\,,$$

each of which serves as a useful abbreviation in some contexts. The magnitude of ζ is the absolute value of the rapidity scalar confined to 0 ≤ ζ < ∞, which agrees with the range 0 ≤ β < 1. Defining the coordinate velocities and Lorentz factor by

$$\mathbf{u} = \frac{d\mathbf{r}}{dt}\,, \quad \mathbf{u}' = \frac{d\mathbf{r}'}{dt'}\,, \quad \gamma_\mathbf{v} = \frac{1}{\sqrt{1 - \dfrac{\mathbf{v}\cdot\mathbf{v}}{c^2}}}$$

taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to

$$\mathbf{u}' = \frac{1}{1 - \frac{\mathbf{v}\cdot\mathbf{u}}{c^2}}\left[\frac{\mathbf{u}}{\gamma_\mathbf{v}} - \mathbf{v} + \frac{1}{c^2}\frac{\gamma_\mathbf{v}}{\gamma_\mathbf{v}+1}\left(\mathbf{u}\cdot\mathbf{v}\right)\mathbf{v}\right]$$

The velocities u and u′ are the velocity of some massive object. They can also be for a third inertial frame (say F′′), in which case they must be constant. Denote either entity by X. Then X moves with velocity u relative to F, or equivalently with velocity u′ relative to F′; in turn, F′ moves with velocity v relative to F. The inverse transformations can be obtained in a similar way, or, as with position coordinates, exchange u and u′ and change v to −v. The transformation of velocity is useful in stellar aberration, the Fizeau experiment, and the relativistic Doppler effect. The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential. In general, given four quantities A and Z = (Zx, Zy, Zz) and their Lorentz-boosted counterparts A′ and Z′ = (Z′x, Z′y, Z′z), a relation of the form

$$A^2 - \mathbf{Z}\cdot\mathbf{Z} = A'^2 - \mathbf{Z}'\cdot\mathbf{Z}'$$

implies that the quantities transform under Lorentz transformations similarly to the transformation of spacetime coordinates:
$$\begin{aligned} A' &= \gamma\left(A - \frac{v\,\mathbf{n}\cdot\mathbf{Z}}{c}\right)\,, \\ \mathbf{Z}' &= \mathbf{Z} + (\gamma - 1)(\mathbf{Z}\cdot\mathbf{n})\mathbf{n} - \frac{\gamma A v\mathbf{n}}{c}\,. \end{aligned}$$

The decomposition of Z (and Z′) into components perpendicular and parallel to v is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange (A, Z) and (A′, Z′) to switch observed quantities, and reverse the direction of relative motion by the substitution n ↦ −n). The quantities (A, Z) collectively make up a four-vector, where A is the "timelike component" and Z the "spacelike component". Examples of such A and Z include the position four-vector (ct, r), the four-momentum (E/c, p), the four-wave-vector (ω/c, k), the four-current density (cρ, j), and the electromagnetic four-potential (φ/c, A). For a given object (e.g., particle, fluid, field, material), if A or Z correspond to properties specific to the object like its charge density, mass density, spin, etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy E of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics, because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin s depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity st; however, a boosted observer will perceive a nonzero timelike component and an altered spin. Not all quantities are invariant in the form shown above; for example, orbital angular momentum L does not have a timelike quantity, and neither does the electric field E nor the magnetic field B. The definition of angular momentum is L = r × p, and in a boosted frame the altered angular momentum is L′ = r′ × p′. Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out L transforms with another vector quantity N = (E/c²)r − tp related to boosts; see Relativistic angular momentum for details. For the case of the E and B fields, the transformations cannot be obtained as directly using vector algebra. The Lorentz force is the definition of these fields, and in F it is F = q(E + v × B) while in F′ it is F′ = q(E′ + v′ × B′). A method of deriving the EM field transformations in an efficient way, which also illustrates the unified nature of the electromagnetic field, uses tensor algebra, given below.

Mathematical formulation

Throughout, italic non-bold capital letters are 4 × 4 matrices, while non-italic bold letters are 3 × 3 matrices.
Writing the coordinates in column vectors and the Minkowski metric η as a square matrix

$$X' = \begin{bmatrix} ct' \\ x' \\ y' \\ z' \end{bmatrix}\,, \quad \eta = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\,, \quad X = \begin{bmatrix} ct \\ x \\ y \\ z \end{bmatrix}$$

the spacetime interval takes the form (superscript T denotes transpose)

$$X\cdot X = X^\mathrm{T}\eta X = X'^\mathrm{T}\eta X'$$

and is invariant under a Lorentz transformation

$$X' = \Lambda X$$

where Λ is a square matrix which can depend on parameters. The set of all Lorentz transformations $\Lambda$ in this article is denoted $\mathcal{L}$. This set together with matrix multiplication forms a group, in this context known as the Lorentz group. Also, the above expression X·X is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group. In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups. In this context the operation of composition amounts to matrix multiplication. From the invariance of the spacetime interval it follows that

$$\eta = \Lambda^\mathrm{T}\eta\Lambda$$

and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule[nb 4] gives immediately

$$\left[\det(\Lambda)\right]^2 = 1 \quad\Rightarrow\quad \det(\Lambda) = \pm 1$$

Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form,

$$\eta = \begin{bmatrix} -1 & 0 \\ 0 & \mathbf{I} \end{bmatrix}\,, \quad \Lambda = \begin{bmatrix} \Gamma & -\mathbf{a}^\mathrm{T} \\ -\mathbf{b} & \mathbf{M} \end{bmatrix}\,,$$

carrying out the block matrix multiplications obtains general conditions on Γ, a, b, M to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results

$$\Gamma^2 = 1 + \mathbf{b}^\mathrm{T}\mathbf{b}$$

is useful; bTb ≥ 0 always, so it follows that

$$\Gamma^2 \geq 1 \quad\Rightarrow\quad \Gamma \leq -1\,, \quad \Gamma \geq 1$$

The negative inequality may be unexpected, because Γ multiplies the time coordinate and this has an effect on time symmetry. If the positive equality holds, then Γ is the Lorentz factor. The determinant and inequality provide four ways to classify Lorentz transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol ∩ meaning "and") of these classifying sets: $\mathcal{L}_+^\uparrow$ (proper orthochronous: det Λ = +1, Γ ≥ 1), $\mathcal{L}_-^\uparrow$ (improper orthochronous: det Λ = −1, Γ ≥ 1), $\mathcal{L}_+^\downarrow$ (proper antichronous: det Λ = +1, Γ ≤ −1), and $\mathcal{L}_-^\downarrow$ (improper antichronous: det Λ = −1, Γ ≤ −1), where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol ∪ meaning "or") of four disjoint sets

$$\mathcal{L} = \mathcal{L}_+^\uparrow \cup \mathcal{L}_-^\uparrow \cup \mathcal{L}_+^\downarrow \cup \mathcal{L}_-^\downarrow$$

A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations Λ and L from a particular subgroup, the composite Lorentz transformations ΛL and LΛ must be in the same subgroup as Λ and L. This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets $\mathcal{L}_+^\uparrow$, $\mathcal{L}_+$, $\mathcal{L}^\uparrow$, and $\mathcal{L}_0 = \mathcal{L}_+^\uparrow \cup \mathcal{L}_-^\downarrow$ all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. $\mathcal{L}_+^\downarrow$, $\mathcal{L}_-^\downarrow$, $\mathcal{L}_-^\uparrow$) do not form subgroups. If a Lorentz covariant 4-vector is measured in one inertial frame with result $X$, and the same measurement made in another inertial frame (with the same orientation and origin) gives result $X'$, the two results will be related by

$$X' = B(\mathbf{v})X$$

where the boost matrix $B(\mathbf{v})$ represents the rotation-free Lorentz transformation between the unprimed and primed frames and $\mathbf{v}$ is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by

$$B(\mathbf{v}) = \begin{bmatrix} \gamma & -\gamma v_x/c & -\gamma v_y/c & -\gamma v_z/c \\ -\gamma v_x/c & 1 + (\gamma-1)\dfrac{v_x^2}{v^2} & (\gamma-1)\dfrac{v_x v_y}{v^2} & (\gamma-1)\dfrac{v_x v_z}{v^2} \\ -\gamma v_y/c & (\gamma-1)\dfrac{v_y v_x}{v^2} & 1 + (\gamma-1)\dfrac{v_y^2}{v^2} & (\gamma-1)\dfrac{v_y v_z}{v^2} \\ -\gamma v_z/c & (\gamma-1)\dfrac{v_z v_x}{v^2} & (\gamma-1)\dfrac{v_z v_y}{v^2} & 1 + (\gamma-1)\dfrac{v_z^2}{v^2} \end{bmatrix} = \begin{bmatrix} \gamma & -\gamma\vec{\beta}^\mathrm{T} \\ -\gamma\vec{\beta} & I + (\gamma-1)\dfrac{\vec{\beta}\vec{\beta}^\mathrm{T}}{\beta^2} \end{bmatrix},$$

where $v = \sqrt{v_x^2 + v_y^2 + v_z^2}$ is the magnitude of the velocity and $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ is the Lorentz factor.
This formula represents a passive transformation, as it describes how the coordinates of the measured quantity change from the unprimed frame to the primed frame. The active transformation is given by $B(-\mathbf{v})$. If a frame F′ is boosted with velocity u relative to frame F, and another frame F′′ is boosted with velocity v relative to F′, the separate boosts are

$$X'' = B(\mathbf{v})X'\,, \quad X' = B(\mathbf{u})X$$

and the composition of the two boosts connects the coordinates in F′′ and F,

$$X'' = B(\mathbf{v})B(\mathbf{u})X\,.$$

Successive transformations act on the left. If u and v are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute: B(v)B(u) = B(u)B(v). This composite transformation happens to be another boost, B(w), where w is collinear with u and v. If u and v are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: B(v)B(u) and B(u)B(v) are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of R(ρ)B(w) or B(w′)R(ρ′). The w and w′ are composite velocities, while ρ and ρ′ are rotation parameters (e.g. axis-angle variables, Euler angles, etc.). The rotation in block matrix form is simply

$$R(\boldsymbol{\rho}) = \begin{bmatrix} 1 & 0 \\ 0 & \mathbf{R}(\boldsymbol{\rho}) \end{bmatrix}\,,$$

where R(ρ) is a 3 × 3 rotation matrix, which rotates any 3-dimensional vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect w and ρ (or w′ and ρ′) to the original boost parameters u and v. In a composition of boosts, the R matrix is named the Wigner rotation, and gives rise to the Thomas precession. These articles give the explicit formulae for the composite transformation matrices, including expressions for w, ρ, w′, ρ′. In this article the axis-angle representation is used for ρ. The rotation is about an axis in the direction of a unit vector e, through angle θ (positive anticlockwise, negative clockwise, according to the right-hand rule). The "axis-angle vector"

$$\boldsymbol{\theta} = \theta\mathbf{e}$$

will serve as a useful abbreviation. Spatial rotations alone are also Lorentz transformations, since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. There are some other similarities and differences between the boost and rotation matrices. The most general proper Lorentz transformation Λ(v, θ) includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, Λ(0, θ) = R(θ) and Λ(v, 0) = B(v). An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed-form expressions for the transformation matrices will be given below using group-theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes Λ(ζ, θ) and B(ζ).
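In the rapidity parametrization, collinear boosts compose by simply adding rapidities (a standard fact), which gives the composite velocity w quoted above in one line:

$$\beta_w = \tanh(\zeta_1 + \zeta_2) = \frac{\tanh\zeta_1 + \tanh\zeta_2}{1 + \tanh\zeta_1\tanh\zeta_2} = \frac{\beta_u + \beta_v}{1 + \beta_u\beta_v},$$

that is, $w = (u + v)/(1 + uv/c^2)$, which never reaches c for subluminal u and v.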
The set of transformations $\{B(\boldsymbol{\zeta}), R(\boldsymbol{\theta}), \Lambda(\boldsymbol{\zeta}, \boldsymbol{\theta})\}$ with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO+(3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension). For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ζ = 0,

$$B_x = I + \zeta \left.\frac{\partial B_x}{\partial\zeta}\right|_{\zeta=0} + \cdots$$

where the higher-order terms not shown are negligible because ζ is small, and Bx is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first and then evaluated at ζ = 0,

$$\left.\frac{\partial B_x}{\partial\zeta}\right|_{\zeta=0} = -K_x\,.$$

For now, Kx is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained

$$B_x = \lim_{N\to\infty}\left(I - \frac{\zeta}{N}K_x\right)^N = e^{-\zeta K_x}$$

where the limit definition of the exponential has been used (see also Characterizations of the exponential function). More generally[nb 5]

$$B(\boldsymbol{\zeta}) = e^{-\boldsymbol{\zeta}\cdot\mathbf{K}}\,, \quad R(\boldsymbol{\theta}) = e^{\boldsymbol{\theta}\cdot\mathbf{J}}\,.$$

The axis-angle vector θ and rapidity vector ζ are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are K = (Kx, Ky, Kz) and J = (Jx, Jy, Jz), each vectors of matrices with the explicit forms[nb 6]

$$K_x = \begin{bmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}\,, \quad K_y = \begin{bmatrix} 0&0&1&0 \\ 0&0&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{bmatrix}\,, \quad K_z = \begin{bmatrix} 0&0&0&1 \\ 0&0&0&0 \\ 0&0&0&0 \\ 1&0&0&0 \end{bmatrix}$$

$$J_x = \begin{bmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&-1 \\ 0&0&1&0 \end{bmatrix}\,, \quad J_y = \begin{bmatrix} 0&0&0&0 \\ 0&0&0&1 \\ 0&0&0&0 \\ 0&-1&0&0 \end{bmatrix}\,, \quad J_z = \begin{bmatrix} 0&0&0&0 \\ 0&0&-1&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{bmatrix}$$

These are all defined in an analogous way to Kx above, although the minus signs in the boost generators are conventional.
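The closed forms quoted below follow because the generators satisfy simple power identities; for the x-boost generator (a short check),

$$K_x^2 = \mathrm{diag}(1, 1, 0, 0)\,, \qquad K_x^3 = K_x\,,$$

so the exponential series for $e^{-\zeta K_x}$ splits into odd powers proportional to $K_x$ and even powers proportional to $K_x^2$, summing to $I - K_x\sinh\zeta + K_x^2(\cosh\zeta - 1)$.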
Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: J are the rotation generators, which correspond to angular momentum, and K are the boost generators, which correspond to the motion of the system in spacetime. The derivative of any smooth curve C(t) with C(0) = I in the group depending on some group parameter t with respect to that group parameter, evaluated at t = 0, serves as a definition of a corresponding group generator G, and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential, as the exponential will always map G smoothly back into the group via t → exp(tG) for all t; this curve will yield G again when differentiated at t = 0. Expanding the exponentials in their Taylor series obtains

$$B(\boldsymbol{\zeta}) = I - \sinh\zeta\,(\mathbf{n}\cdot\mathbf{K}) + (\cosh\zeta - 1)(\mathbf{n}\cdot\mathbf{K})^2$$

$$R(\boldsymbol{\theta}) = I + \sin\theta\,(\mathbf{e}\cdot\mathbf{J}) + (1 - \cos\theta)(\mathbf{e}\cdot\mathbf{J})^2\,,$$

which compactly reproduce the boost and rotation matrices as given in the previous section. It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product

$$\begin{aligned}\Lambda &= (I - \boldsymbol{\zeta}\cdot\mathbf{K} + \cdots)(I + \boldsymbol{\theta}\cdot\mathbf{J} + \cdots) \\ &= (I + \boldsymbol{\theta}\cdot\mathbf{J} + \cdots)(I - \boldsymbol{\zeta}\cdot\mathbf{K} + \cdots) \\ &= I - \boldsymbol{\zeta}\cdot\mathbf{K} + \boldsymbol{\theta}\cdot\mathbf{J} + \cdots\end{aligned}$$

is commutative, because only linear terms are required (products like (θ·J)(ζ·K) and (ζ·K)(θ·J) count as higher-order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential

$$\Lambda(\boldsymbol{\zeta}, \boldsymbol{\theta}) = e^{-\boldsymbol{\zeta}\cdot\mathbf{K} + \boldsymbol{\theta}\cdot\mathbf{J}}.$$

The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular,

$$e^{-\boldsymbol{\zeta}\cdot\mathbf{K} + \boldsymbol{\theta}\cdot\mathbf{J}} \neq e^{-\boldsymbol{\zeta}\cdot\mathbf{K}}e^{\boldsymbol{\theta}\cdot\mathbf{J}},$$

because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators J and K), see Wigner rotation. If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies. Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators.
In other words, the set of all Lorentz generators {\displaystyle V=\{{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \}} together with the operations of ordinary matrix addition and multiplication of a matrix by a number, forms a vector space over the real numbers.[nb 7] The generators Jx, Jy, Jz, Kx, Ky, Kz form a basis set of V, and the components of the axis-angle and rapidity vectors, θx, θy, θz, ζx, ζy, ζz, are the coordinates of a Lorentz generator with respect to this basis.[nb 8] Three of the commutation relations of the Lorentz generators are {\displaystyle [J_{\text{x}},J_{\text{y}}]=J_{\text{z}}\,,\quad [K_{\text{x}},K_{\text{y}}]=-J_{\text{z}}\,,\quad [J_{\text{x}},K_{\text{y}}]=K_{\text{z}}\,,} where the bracket [A, B] = AB − BA is known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, and repeat). These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra {\displaystyle {\mathfrak {so}}(3,1)}. In summary, a Lie algebra is defined as a vector space V over a field of numbers, together with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity, alternatization, and the Jacobi identity. Here the operation [ , ] is the commutator, which satisfies all of these axioms; the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers. Linking terminology used in mathematics and physics: a group generator is any element of the Lie algebra; a group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis; a basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense. The exponential map from the Lie algebra to the Lie group, {\displaystyle \exp :{\mathfrak {so}}(3,1)\to \mathrm {SO} (3,1),} provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential. Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra. Lorentz transformations also include parity inversion {\displaystyle P={\begin{bmatrix}1&0\\0&-\mathbf {I} \end{bmatrix}}} which negates all the spatial coordinates only, and time reversal {\displaystyle T={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}} which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here I is the 3 × 3 identity matrix. These are both symmetric, are their own inverses (see Involution (mathematics)), and each has determinant −1. This latter property makes them improper transformations. If Λ is a proper orthochronous Lorentz transformation, then TΛ is improper antichronous, PΛ is improper orthochronous, and TPΛ = PTΛ is proper antichronous. Two other spacetime symmetries have not been accounted for.
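The three quoted commutation relations can be confirmed directly; a minimal sketch (Python/NumPy, reusing the same generator layout as above, not from the article):

import numpy as np

def K(i):
    M = np.zeros((4, 4))
    M[0, i + 1] = M[i + 1, 0] = 1.0
    return M

def J(i):
    M = np.zeros((4, 4))
    a, b = [(2, 3), (3, 1), (1, 2)][i]
    M[a, b], M[b, a] = -1.0, 1.0
    return M

def comm(A, B):  # the commutator [A, B] = AB - BA
    return A @ B - B @ A

assert np.allclose(comm(J(0), J(1)), J(2))    # [Jx, Jy] = Jz
assert np.allclose(comm(K(0), K(1)), -J(2))   # [Kx, Ky] = -Jz
assert np.allclose(comm(J(0), K(1)), K(2))    # [Jx, Ky] = Kz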
In order for the spacetime interval to be invariant, it can be shown that it is necessary and sufficient for the coordinate transformation to be of the form {\displaystyle X'=\Lambda X+C} where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation. If C = 0, this is a homogeneous Lorentz transformation. Poincaré transformations are not dealt with further in this article. Tensor formulation Writing the general matrix transformation of coordinates as the matrix equation {\displaystyle {\begin{bmatrix}{x'}^{0}\\{x'}^{1}\\{x'}^{2}\\{x'}^{3}\end{bmatrix}}={\begin{bmatrix}{\Lambda ^{0}}_{0}&{\Lambda ^{0}}_{1}&{\Lambda ^{0}}_{2}&{\Lambda ^{0}}_{3}\\{\Lambda ^{1}}_{0}&{\Lambda ^{1}}_{1}&{\Lambda ^{1}}_{2}&{\Lambda ^{1}}_{3}\\{\Lambda ^{2}}_{0}&{\Lambda ^{2}}_{1}&{\Lambda ^{2}}_{2}&{\Lambda ^{2}}_{3}\\{\Lambda ^{3}}_{0}&{\Lambda ^{3}}_{1}&{\Lambda ^{3}}_{2}&{\Lambda ^{3}}_{3}\\\end{bmatrix}}{\begin{bmatrix}x^{0}\\x^{1}\\x^{2}\\x^{3}\end{bmatrix}}} allows the transformation of other physical quantities that cannot be expressed as four-vectors (e.g., tensors or spinors of any order in 4-dimensional spacetime) to be defined. In the corresponding tensor index notation, the above matrix expression is {\displaystyle {x'}^{\nu }={\Lambda ^{\nu }}_{\mu }x^{\mu },} where lower and upper indices label covariant and contravariant components respectively, and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index, and the second index corresponds to the column index. The transformation matrix is universal for all four-vectors, not just 4-dimensional spacetime coordinates. If A is any four-vector, then in tensor index notation {\displaystyle {A'}^{\nu }={\Lambda ^{\nu }}_{\mu }A^{\mu }\,.} Alternatively, one writes {\displaystyle A^{\nu '}={\Lambda ^{\nu '}}_{\mu }A^{\mu }\,,} in which the primed indices denote the indices of A in the primed frame. For a general n-component object one may write {\displaystyle {X'}^{\alpha }={\Pi (\Lambda )^{\alpha }}_{\beta }X^{\beta }\,,} where Π is the appropriate representation of the Lorentz group, an n × n matrix for every Λ. In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from 1 to n. E.g., if X is a bispinor, then the indices are called Dirac indices. There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index; e.g., {\displaystyle x_{\nu }=\eta _{\mu \nu }x^{\mu },} where η is the metric tensor.
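As a concrete illustration (a sketch in Python/NumPy; the sample numbers are arbitrary, not from the article), the index equation x'^ν = Λ^ν_μ x^μ is just a matrix-vector product, with the first (contravariant) index as the row and the second as the column:

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
# Standard boost along x; first index = row, second index = column.
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

x = np.array([2.0, 1.0, 0.0, 0.0])  # components (x^0, x^1, x^2, x^3)
x_prime = Lam @ x                   # x'^nu = Lambda^nu_mu x^mu
assert np.allclose(x_prime, np.einsum('nm,m->n', Lam, x))  # explicit index summation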
(The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by {\displaystyle x^{\mu }=\eta ^{\mu \nu }x_{\nu },} where, when viewed as matrices, η^{μν} is the inverse of η_{μν}. As it happens, η^{μν} = η_{μν}. This is referred to as raising an index. To transform a covariant vector A_μ, first raise its index, then transform it according to the same rule as for contravariant 4-vectors, then finally lower the index; {\displaystyle {A'}_{\nu }=\eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }A_{\mu }.} But {\displaystyle \eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu };} that is, it is the (μ, ν)-component of the inverse Lorentz transformation. One defines (as a matter of notation) {\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },} and may in this notation write {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.} Now for a subtlety. The implied summation on the right hand side of {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }} is running over a row index of the matrix representing Λ^{−1}. Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of Λ acting on the column vector A_μ. That is, in pure matrix notation, {\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.} This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations: simply replace Λ with Π(Λ). If A and B are linear operators on vector spaces U and V, then a linear operator A ⊗ B may be defined on the tensor product of U and V, denoted U ⊗ V, according to {\displaystyle (A\otimes B)(u\otimes v)=Au\otimes Bv,\qquad u\in U,v\in V,u\otimes v\in U\otimes V.} (T1) From this it is immediately clear that if u and v are four-vectors in V, then u ⊗ v ∈ T²V ≡ V ⊗ V transforms as {\displaystyle u\otimes v\rightarrow \Lambda u\otimes \Lambda v={\Lambda ^{\mu }}_{\nu }u^{\nu }\otimes {\Lambda ^{\rho }}_{\sigma }v^{\sigma }={\Lambda ^{\mu }}_{\nu }{\Lambda ^{\rho }}_{\sigma }u^{\nu }\otimes v^{\sigma }\equiv {\Lambda ^{\mu }}_{\nu }{\Lambda ^{\rho }}_{\sigma }w^{\nu \sigma }.} (T2) The second step uses the bilinearity of the tensor product, and the last step defines a 2-tensor in component form, or rather, it just renames the tensor u ⊗ v. These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space V can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity T.
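A sketch (Python/NumPy, arbitrary sample values, not from the article) of the covariant rule: lowering an index with η and then transforming with the inverse transpose of Λ agrees with transforming first and lowering afterwards, precisely because Λ satisfies Λ^T η Λ = η.

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric tensor, signature (+, -, -, -)

A = np.array([1.0, 0.5, -0.3, 2.0])     # contravariant components A^mu
A_low = eta @ A                         # A_mu = eta_{mu nu} A^nu

assert np.allclose(Lam.T @ eta @ Lam, eta)        # defining property of a Lorentz matrix
assert np.allclose(eta @ (Lam @ A),               # transform, then lower the index
                   np.linalg.inv(Lam).T @ A_low)  # = (Lambda^{-1})^T acting on A_mu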
It is given by {\displaystyle T_{\theta '\iota '\cdots \kappa '}^{\alpha '\beta '\cdots \zeta '}={\Lambda ^{\alpha '}}_{\mu }{\Lambda ^{\beta '}}_{\nu }\cdots {\Lambda ^{\zeta '}}_{\rho }{\Lambda _{\theta '}}^{\sigma }{\Lambda _{\iota '}}^{\upsilon }\cdots {\Lambda _{\kappa '}}^{\zeta }T_{\sigma \upsilon \cdots \zeta }^{\mu \nu \cdots \rho },} (T3) where Λ_{χ′}^{ψ} is defined above. This form can generally be reduced to the form for general n-component objects given above, with a single matrix (Π(Λ)) operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor. Lorentz transformations can also be used to illustrate that the magnetic field B and electric field E are simply different aspects of the same force, the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment. The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector. The electromagnetic field strength tensor is given by {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-{\frac {1}{c}}E_{\text{x}}&-{\frac {1}{c}}E_{\text{y}}&-{\frac {1}{c}}E_{\text{z}}\\{\frac {1}{c}}E_{\text{x}}&0&-B_{\text{z}}&B_{\text{y}}\\{\frac {1}{c}}E_{\text{y}}&B_{\text{z}}&0&-B_{\text{x}}\\{\frac {1}{c}}E_{\text{z}}&-B_{\text{y}}&B_{\text{x}}&0\end{bmatrix}}} in SI units, with metric signature (+, −, −, −). In relativity, the factor c may be absorbed into the tensor components to eliminate its explicit appearance in expressions. Consider a Lorentz boost in the x-direction. It is given by {\displaystyle {\Lambda ^{\mu }}_{\nu }={\begin{bmatrix}\gamma &-\gamma \beta &0&0\\-\gamma \beta &\gamma &0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}},\qquad F^{\mu \nu }={\begin{bmatrix}0&E_{\text{x}}&E_{\text{y}}&E_{\text{z}}\\-E_{\text{x}}&0&B_{\text{z}}&-B_{\text{y}}\\-E_{\text{y}}&-B_{\text{z}}&0&B_{\text{x}}\\-E_{\text{z}}&B_{\text{y}}&-B_{\text{x}}&0\end{bmatrix}},} where the signature is (−, +, +, +) and the field tensor is displayed side by side for easiest possible reference in the manipulations below. The general transformation law (T3) becomes
{\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }.} For the magnetic field one obtains {\displaystyle {\begin{aligned}B_{x'}&=F^{2'3'}={\Lambda ^{2}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{2}}_{2}{\Lambda ^{3}}_{3}F^{23}=1\times 1\times B_{\text{x}}\\&=B_{\text{x}},\\B_{y'}&=F^{3'1'}={\Lambda ^{3}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{\nu }F^{3\nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{0}F^{30}+{\Lambda ^{3}}_{3}{\Lambda ^{1}}_{1}F^{31}\\&=1\times (-\beta \gamma )(-E_{\text{z}})+1\times \gamma B_{\text{y}}=\gamma B_{\text{y}}+\beta \gamma E_{\text{z}}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\text{y}}\\B_{z'}&=F^{1'2'}={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{1}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{1}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=(-\gamma \beta )\times 1\times E_{\text{y}}+\gamma \times 1\times B_{\text{z}}=\gamma B_{\text{z}}-\beta \gamma E_{\text{y}}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\text{z}}\end{aligned}}} For the electric field one obtains {\displaystyle {\begin{aligned}E_{x'}&=F^{0'1'}={\Lambda ^{0}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{1}{\Lambda ^{1}}_{0}F^{10}+{\Lambda ^{0}}_{0}{\Lambda ^{1}}_{1}F^{01}\\&=(-\gamma \beta )(-\gamma \beta )(-E_{\text{x}})+\gamma \gamma E_{\text{x}}=-\gamma ^{2}\beta ^{2}(E_{\text{x}})+\gamma ^{2}E_{\text{x}}=E_{\text{x}}(1-\beta ^{2})\gamma ^{2}\\&=E_{\text{x}},\\E_{y'}&=F^{0'2'}={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{0}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{0}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=\gamma \times 1\times E_{\text{y}}+(-\beta \gamma )\times 1\times B_{\text{z}}=\gamma E_{\text{y}}-\beta \gamma B_{\text{z}}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\text{y}}\\E_{z'}&=F^{0'3'}={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{3}F^{\mu 3}={\Lambda ^{0}}_{0}{\Lambda ^{3}}_{3}F^{03}+{\Lambda ^{0}}_{1}{\Lambda ^{3}}_{3}F^{13}\\&=\gamma \times 1\times E_{\text{z}}-\beta \gamma \times 1\times (-B_{\text{y}})=\gamma E_{\text{z}}+\beta \gamma B_{\text{y}}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\text{z}}.\end{aligned}}} Here, β = (β, 0, 0) is used.
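The whole component-by-component calculation can be checked at once (a sketch in Python/NumPy with arbitrary field values, not from the article): for a two-index contravariant tensor, applying one Λ per index is the matrix product Λ F Λ^T.

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

Ex, Ey, Ez, Bx, By, Bz = 1.0, 2.0, 3.0, 0.4, 0.5, 0.6
# Field tensor in the (-, +, +, +) form displayed above.
F = np.array([[0.0, Ex, Ey, Ez],
              [-Ex, 0.0, Bz, -By],
              [-Ey, -Bz, 0.0, Bx],
              [-Ez, By, -Bx, 0.0]])

Fp = Lam @ F @ Lam.T  # F'^{mu' nu'} = Lambda^{mu'}_mu Lambda^{nu'}_nu F^{mu nu}

assert np.isclose(Fp[0, 1], Ex)                        # E_x' = E_x (parallel component)
assert np.isclose(Fp[2, 3], Bx)                        # B_x' = B_x (parallel component)
assert np.isclose(Fp[0, 2], gamma * (Ey - beta * Bz))  # gamma (E + beta x B)_y
assert np.isclose(Fp[1, 2], gamma * (Bz - beta * Ey))  # gamma (B - beta x E)_z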
These results can be summarized by {\displaystyle {\begin{aligned}\mathbf {E} _{\parallel '}&=\mathbf {E} _{\parallel }\\\mathbf {B} _{\parallel '}&=\mathbf {B} _{\parallel }\\\mathbf {E} _{\bot '}&=\gamma \left(\mathbf {E} _{\bot }+{\boldsymbol {\beta }}\times \mathbf {B} _{\bot }\right)=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\bot },\\\mathbf {B} _{\bot '}&=\gamma \left(\mathbf {B} _{\bot }-{\boldsymbol {\beta }}\times \mathbf {E} _{\bot }\right)=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\bot },\end{aligned}}} and are independent of the metric signature. For SI units, substitute E → E/c. Misner, Thorne & Wheeler (1973) refer to this last form as the 3 + 1 view, as opposed to the geometric view represented by the tensor expression {\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu },} and make a strong point of the ease with which results that are difficult to achieve using the 3 + 1 view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime, as opposed to two interdependent but separate 3-vector fields in space and time. The fields E (alone) and B (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations (T1) and (T2), which immediately yield (T3). The primed and unprimed tensors refer to the same event in spacetime. Thus the complete equation with spacetime dependence is {\displaystyle F^{\mu '\nu '}\left(x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }\left(\Lambda ^{-1}x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }(x).} Length contraction has an effect on charge density ρ and current density J, and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors, {\displaystyle {\begin{aligned}\mathbf {j} '&=\mathbf {j} -\gamma \rho v\mathbf {n} +\left(\gamma -1\right)(\mathbf {j} \cdot \mathbf {n} )\mathbf {n} \\\rho '&=\gamma \left(\rho -\mathbf {j} \cdot {\frac {v\mathbf {n} }{c^{2}}}\right),\end{aligned}}} or, in the simpler geometric view, {\displaystyle j^{\mu '}={\Lambda ^{\mu '}}_{\mu }j^{\mu }.} Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector. The Maxwell equations are invariant under Lorentz transformations. Equation (T1) holds unmodified for any representation of the Lorentz group, including the bispinor representation.
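A sketch (Python/NumPy, arbitrary values; natural units c = 1 are my simplifying assumption, not the article's) confirming that the 3 + 1 formulas for ρ′ and j′ are just the components of the four-current j^μ = (cρ, j) transformed by the same Λ:

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

c = 1.0                        # natural units assumed
v = beta * c
n = np.array([1.0, 0.0, 0.0])  # boost direction (along x)
rho = 2.0
j = np.array([0.3, 0.7, -0.2])

j4 = np.concatenate(([c * rho], j))  # four-current j^mu = (c rho, j)
j4p = Lam @ j4                       # geometric view: one Lambda does everything

rho_p = gamma * (rho - j @ (v * n) / c**2)                   # 3 + 1 view formulas
j_p = j - gamma * rho * v * n + (gamma - 1.0) * (j @ n) * n
assert np.isclose(j4p[0], c * rho_p) and np.allclose(j4p[1:], j_p)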
In (T2) one simply replaces all occurrences of Λ by the bispinor representation Π(Λ), {\displaystyle {\begin{aligned}u\otimes v\rightarrow \Pi (\Lambda )u\otimes \Pi (\Lambda )v&={\Pi (\Lambda )^{\alpha }}_{\beta }u^{\beta }\otimes {\Pi (\Lambda )^{\rho }}_{\sigma }v^{\sigma }\\&={\Pi (\Lambda )^{\alpha }}_{\beta }{\Pi (\Lambda )^{\rho }}_{\sigma }u^{\beta }\otimes v^{\sigma }\\&\equiv {\Pi (\Lambda )^{\alpha }}_{\beta }{\Pi (\Lambda )^{\rho }}_{\sigma }w^{\beta \sigma }\end{aligned}}} (T4) The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons. A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to a rule expressed in terms of W(Λ, p), Wigner's little group, and D(j), the (2j + 1)-dimensional representation of SO(3).
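Finally, the tensor-product rule (T1)/(T2) itself is easy to exhibit concretely (a sketch in Python/NumPy with sample values, not from the article): realizing u ⊗ v with np.kron, the product transforms with Λ ⊗ Λ, and the result coincides with Λu ⊗ Λv, which is the content of the component law Λ^μ_ν Λ^ρ_σ w^{νσ}.

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma * beta, 0.0, 0.0],
                [-gamma * beta, gamma, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([0.5, -1.0, 0.0, 2.0])

w = np.kron(u, v)                # u (x) v flattened to a 16-component vector
w_trans = np.kron(Lam, Lam) @ w  # (Lambda (x) Lambda)(u (x) v)
assert np.allclose(w_trans, np.kron(Lam @ u, Lam @ v))  # = (Lambda u) (x) (Lambda v)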
========================================
[SOURCE: https://en.wikipedia.org/wiki/Arab_Spring] | [TOKENS: 13667]
Contents Arab Spring The Arab Spring (Arabic: الربيع العربي, romanized: ar-rabīʻ al-ʻarabī) was a series of pro-democracy anti-government protests, uprisings, and armed rebellions that spread across much of the Arab world in the early 2010s. It began in Tunisia in response to the self-immolation of Mohamed Bouazizi, who later died of his injuries. From Tunisia, the protests initially spread to five other countries: Libya, Egypt, Yemen, Syria and Bahrain. The rulers deposed include Zine El Abidine Ben Ali of Tunisia, Muammar Gaddafi of Libya, and Hosni Mubarak of Egypt, all in 2011; and Ali Abdullah Saleh of Yemen in 2012.[a] Major uprisings and social violence occurred, including riots, civil wars, and insurgencies. Sustained street demonstrations took place in Morocco, Iraq, Algeria, Lebanon, Jordan, Kuwait, Oman and Sudan. Minor protests took place in Djibouti, Mauritania, Palestine, Saudi Arabia and the Western Sahara. A major slogan of the demonstrators in the Arab world was ash-shaʻb yurīd isqāṭ an-niẓām! (Arabic: الشعب يريد إسقاط النظام, lit. 'the people want to bring down the regime'). The wave of initial revolutions and protests faded by mid to late 2012, as many Arab Spring demonstrations were met with violent responses from authorities, pro-government militias, counterdemonstrators, and militaries. These attacks were answered with violence from protesters in some cases. Multiple large-scale conflicts followed: the Syrian civil war; the rise of ISIS, insurgency in Iraq and the following civil war; the Egyptian Crisis, the election and removal from office of Mohamed Morsi, and subsequent unrest and insurgency; the Libyan Crisis; and the Yemeni crisis and subsequent civil war. Regimes that lacked major oil wealth and hereditary succession arrangements were more likely to undergo regime change. A power struggle continued after the immediate response to the Arab Spring. While leadership changed and regimes were held accountable, power vacuums opened across the Arab world. Ultimately, this resulted in a contentious battle between a consolidation of power by religious elites and growing support for democracy in many Muslim-majority states. The early hopes that these popular movements would end corruption, increase political participation, and bring about greater economic equity quickly collapsed in the wake of the counter-revolutionary moves by foreign state actors in Yemen, the regional and international military interventions in Bahrain and Yemen, and the destructive civil wars in Syria, Iraq, Libya, and Yemen. Some referred to the succeeding and still ongoing conflicts as the Arab Winter. A new wave of protests began in 2018, leading to the resignation of prime ministers Haider al-Abadi of Iraq in 2018 and Saad Hariri of Lebanon in 2020, and the overthrow of presidents Omar al-Bashir of Sudan and Abdelaziz Bouteflika of Algeria in 2019. Sometimes called the "Second Arab Spring", these events showed that the conditions that started the Arab Spring had not faded and that political movements against authoritarianism and exploitation remain ongoing. Continued protest movements in Algeria, Sudan, Iraq, Lebanon, Egypt, and Syria have been seen as a continuation of the Arab Spring. As of 2025, several conflicts that some consider to have originated in the Arab Spring are still continuing. A major shift in the Syrian civil war occurred in December 2024, when a rebel offensive led to the fall of the Assad regime after over a decade of warfare. In Libya, a major civil war, in which foreign powers intervened, has concluded.
In Yemen, a civil war continues to affect the country. Etymology The denomination "Arab Spring" is contested by some scholars and observers, who claim that the term is problematic for several reasons. First, it was coined by Western commentators, not those involved in the events. The first specific use of the term Arab Spring to denote these events may have been in the US political journal Foreign Policy. Political scientist Marc Lynch described Arab Spring as "a term I may have unintentionally coined in a 6 January 2011 article" for Foreign Policy magazine. Protestors involved in the events, however, described their own political actions as "uprising" (intifada), Arab "awakening" (sahwa) and Arab "renaissance" (nahda), using expressions like al-marar al-Arabi (the Arab bitterness), karama (dignity) and thawra (revolution). Some authors argue that Western governments, scholars and media used the term to minimize people's revolutionary aims and discourse. Joseph Massad on Al Jazeera said the term was "part of a US strategy of controlling the movement's aims and goals" and directing it towards Western-style liberal democracy. When Arab Spring protests in some countries were followed by electoral success for Islamist parties, some American pundits coined the terms Islamist Spring and Islamist Winter. The term "Spring" further illustrates the problematic nature of projecting Western expectations onto non-Western actors and practices. The terminology follows the Western example of the Revolutions of 1848, referred to as the "Spring of Nations", and the Prague Spring of 1968, after whose suppression a Czech student, Jan Palach, set himself on fire as Mohamed Bouazizi later did. In the aftermath of the Iraq War, the term was used by various commentators and bloggers who anticipated a major Arab movement towards democratization. The term "Arab Spring" is thus contested because it signifies an expectation that the events would replicate the example of democratic revolutions set by the West. Causes The world watched the events of the Arab Spring unfold, "gripped by the narrative of a young generation peacefully rising up against oppressive authoritarianism to secure a more democratic political system and a brighter economic future". The Arab Spring is widely believed to have been instigated by dissatisfaction, particularly of youth and unions, with the rule of local governments, though some have speculated that wide gaps in income levels and pressures caused by the Great Recession may have had a hand as well. Some activists had taken part in programs sponsored by the US-funded National Endowment for Democracy, but the United States government claimed that it did not initiate the uprisings. Numerous factors led to the protests, including issues such as reform, human rights violations, political corruption, economic decline, unemployment, extreme poverty, and a number of demographic structural factors, such as a large percentage of educated but dissatisfied youth within the entire population. Catalysts for the revolts in all Northern African and Persian Gulf countries included the concentration of wealth in the hands of monarchs in power for decades, insufficient transparency of its redistribution, corruption, and especially the refusal of the youth to accept the status quo. Some protesters looked to the Turkish model as an ideal (contested but peaceful elections, a fast-growing but liberal economy, a secular constitution but an Islamist government).
Other analysts blamed the rise in food prices on commodity traders and the conversion of crops to ethanol. Yet others have claimed that the context of high rates of unemployment and corrupt political regimes led to dissent movements within the region. In the wake of the Arab Spring protests, a considerable amount of attention focused on the role of social media and digital technologies in allowing citizens within areas affected by "the Arab Uprisings" to circumvent state-operated media channels as a means of collective activism. The influence of social media on political activism during the Arab Spring has, however, been much debated. Protests took place both in states with a very high level of Internet usage (such as Bahrain, with 88% of its population online in 2011) and in states with some of the lowest Internet penetration (Yemen and Libya). The use of social media platforms more than doubled in Arab countries during the protests, with the exception of Libya. Some researchers have shown how collective intelligence (the dynamics of crowds in participatory systems such as social media) has immense power to support collective action, such as fomenting political change. As of 5 April 2011, the number of Facebook users in the Arab world surpassed 27.7 million people. Some critics have argued that digital technologies and other forms of communication (videos, cellular phones, blogs, photos, emails, and text messages) have brought about the concept of a "digital democracy" in parts of North Africa affected by the uprisings. Facebook, Twitter, and other major social media played a key role in the movement of Egyptian and Tunisian activists in particular. Nine out of ten Egyptians and Tunisians responded to a poll that they used Facebook to organize protests and spread awareness. This large population of young Egyptian men referred to themselves as "the Facebook generation", exemplifying their escape from their non-modernized past. Furthermore, 28% of Egyptians and 29% of Tunisians from the same poll said that blocking Facebook greatly hindered and/or disrupted communication. Social media sites were a platform for different movements formed by many frustrated citizens, including the 2008 "April 6 Youth Movement" organized by Ahmed Maher, which set out to organize and promote a nationwide labor strike and which inspired the later creation of the "Progressive Youth of Tunisia". During the Arab Spring, people created pages on Facebook to raise awareness about alleged crimes against humanity, such as police brutality in the Egyptian Revolution (see Wael Ghonim and Death of Khaled Mohamed Saeed). Whether the project of raising awareness was primarily pursued by Arabs themselves or simply advertised by Western social media users is a matter of debate. Jared Keller, a journalist for The Atlantic, claims that most activists and protesters used Facebook (among other social media) to organize; however, what influenced Iran was "good old-fashioned word of mouth". Keller argued that the sudden and anomalous social media output was caused by Westerners witnessing the situations and then broadcasting them. In the Middle East and North Africa, texting, emailing, and blogging were used only to organize and communicate information about internal local protests.
A study by Zeynep Tufekci of the University of North Carolina and Christopher Wilson of the United Nations Development Program concluded that "social media in general, and Facebook in particular, provided new sources of information the regime could not easily control and were crucial in shaping how citizens made individual decisions about participating in protests, the logistics of protest, and the likelihood of success." Marc Lynch of George Washington University said, "while social media boosters envisioned the creation of a new public sphere based on dialogue and mutual respect, the reality is that Islamists and their adversaries retreat to their respective camps, reinforcing each other's prejudices while throwing the occasional rhetorical bomb across the no-man's land that the center has become." Lynch also stated in a Foreign Policy article, "There is something very different about scrolling through pictures and videos of unified, chanting Yemeni or Egyptian crowds demanding democratic change and waking up to a gory image of a headless 6-year-old girl on your Facebook news feed." Social networks were not the only instrument for rebels to coordinate their efforts and communicate. In the countries with the lowest Internet penetration and a limited role for social networks, such as Yemen and Libya, the role of mainstream electronic media (cellular phones, emails, and video clips such as those on YouTube) was very important in shedding light on the situation in each country and spreading word of the protests to the outside world. In Egypt, and in Cairo particularly, mosques were one of the main platforms for coordinating protest actions and raising awareness among the masses. Conversely, the scholarly literature on the Middle East, political scientist Gregory Gause has found, had failed to predict the events of the Arab uprisings. Commenting on an early article by Gause, whose review of a decade of Middle Eastern studies led him to conclude that almost no scholar foresaw what was coming, Chair of Ottoman and Turkish Studies at Tel Aviv University Ehud R. Toledano writes that Gause's finding is "a strong and sincere mea culpa" and that his criticism of Middle East experts for "underestimating the hidden forces driving change ... while they worked instead to explain the unshakable stability of repressive authoritarian regimes" is well-placed. Toledano then quotes Gause saying that, "as they wipe the egg off their faces," those experts "need to reconsider long-held assumptions about the Arab world." Timeline History Tunisia experienced a series of conflicts during the three years leading up to the Arab Spring, the most notable occurring in the mining area of Gafsa in 2008, where protests continued for many months. These protests included rallies, sit-ins, and strikes, during which there were two fatalities, an unspecified number of wounded, and dozens of arrests. In Egypt, the labor movement had been strong for years, with more than 3,000 labor actions since 2004, and provided an important venue for organizing protests and collective action. One important demonstration was an attempted workers' strike on 6 April 2008 at the state-run textile factories of al-Mahalla al-Kubra, just outside Cairo. The idea for this type of demonstration spread throughout the country, promoted by computer-literate working-class youths and their supporters among middle-class college students.
A Facebook page, set up to promote the strike, attracted tens of thousands of followers and provided the platform for sustained political action in pursuit of the "long revolution". The government mobilized to break the strike through infiltration and riot police, and while the regime was somewhat successful in forestalling a strike, dissidents formed the "6 April Committee" of youths and labor activists, which became one of the major forces calling for the anti-Mubarak demonstration on 25 January in Tahrir Square. In Algeria, discontent had been building for years over a number of issues. In February 2008, US Ambassador Robert Ford wrote in a leaked diplomatic cable that Algeria was "unhappy" with long-standing political alienation; that social discontent persisted throughout the country, with food strikes occurring almost every week; that there were demonstrations every day somewhere in the country; and that the Algerian government was corrupt and fragile.[citation needed] Some claimed that during 2010 there were as many as "9,700 riots and unrests" throughout the country. Many protests focused on issues such as education and health care, while others cited rampant corruption. In Western Sahara, the Gdeim Izik protest camp was erected 12 kilometres (7.5 mi) southeast of El Aaiún by a group of young Sahrawis on 9 October 2010. Their intention was to demonstrate against labor discrimination, unemployment, looting of resources, and human rights abuses. The camp contained between 12,000 and 20,000 inhabitants, but on 8 November 2010 it was destroyed and its inhabitants evicted by Moroccan security forces. The security forces faced strong opposition from some young Sahrawi civilians, and rioting soon spread to El Aaiún and other towns within the territory, resulting in an unknown number of injuries and deaths. Violence against Sahrawis in the aftermath of the protests was cited as a reason for renewed protests months later, after the start of the Arab Spring. The catalyst for the escalation of protests was the self-immolation of Tunisian Mohamed Bouazizi. Unable to find work and selling fruit at a roadside stand, Bouazizi had his wares confiscated by a municipal inspector on 17 December 2010. An hour later he doused himself with gasoline and set himself afire. His death on 4 January 2011 brought together various groups dissatisfied with the existing system, including many unemployed persons, political and human rights activists, labor and trade unionists, students, professors, lawyers, and others, to begin the Tunisian Revolution. The series of protests and demonstrations across the Middle East and North Africa that commenced in 2010 became known as the "Arab Spring", and sometimes as the "Arab Spring and Winter", "Arab Awakening", or "Arab Uprisings", even though not all the participants in the protests were Arab. It was sparked by the first protests that occurred in Tunisia on 18 December 2010 in Sidi Bouzid, following Mohamed Bouazizi's self-immolation in protest of police corruption and ill treatment. With the success of the protests in Tunisia, a wave of unrest sparked by the Tunisian "Burning Man" struck Algeria, Jordan, Egypt, and Yemen, then spread to other countries. The largest, most organized demonstrations often occurred on a "day of rage", usually after Friday afternoon prayers. The protests also triggered similar unrest outside the region.
Contrary to expectations, the revolutions were not led by Islamists: even though the Islamists were certainly present during the uprisings, they never determined the directions of these movements; after all, there was hardly any central leadership in any of the uprisings. Some Islamist groups were initially even reluctant to join in the protests, and the major religious groups in Egypt (Salafis, al-Azhar, and the Coptic Church) initially opposed the revolution. The mufti of Egypt, Ali Gomaa, proclaimed that rising against a lawful ruler, President Mubarak, was haram, not permissible. And the Muslim Brotherhood's old guard joined in the protests reluctantly, only after being pushed by the group's young people. The Arab Spring caused the "biggest transformation of the Middle East since decolonization". By the end of February 2012, rulers had been forced from power in Tunisia, Egypt, Libya, and Yemen; civil uprisings had erupted in Bahrain and Syria; major protests had broken out in Algeria, Iraq, Jordan, Kuwait, Morocco, Oman, and Sudan; and minor protests had occurred in Mauritania, Saudi Arabia, Djibouti, Western Sahara, and Palestine. Tunisian President Zine El Abidine Ben Ali fled to Saudi Arabia on 14 January 2011 following the Tunisian Revolution protests. Egyptian President Hosni Mubarak resigned on 11 February 2011 after 18 days of massive protests, ending his 30-year presidency. The Libyan leader Muammar Gaddafi was overthrown on 23 August 2011, after the National Transitional Council (NTC) took control of Bab al-Azizia. He was killed on 20 October 2011 in his hometown of Sirte after the NTC took control of the city. Yemeni President Ali Abdullah Saleh signed the GCC power-transfer deal under which a presidential election was held, resulting in his successor Abdrabbuh Mansur Hadi formally replacing him as president on 27 February 2012, in exchange for immunity from prosecution. Weapons and Tuareg fighters returning from the Libyan Civil War stoked a simmering conflict in Mali that has been described as "fallout" from the Arab Spring in North Africa. During this period, several leaders announced their intentions to step down at the end of their current terms. Sudanese President Omar al-Bashir announced that he would not seek reelection in 2015 (he ultimately retracted his announcement and ran anyway), as did Iraqi Prime Minister Nouri al-Maliki, whose term was to end in 2014, although there were violent demonstrations demanding his immediate resignation in 2011. Protests in Jordan also caused the sacking of four successive governments by King Abdullah. The popular unrest in Kuwait also resulted in the resignation of Prime Minister Nasser Al-Sabah's cabinet. The geopolitical implications of the protests drew global attention. Some protesters were nominated for the 2011 Nobel Peace Prize. Tawakkol Karman of Yemen was co-recipient of the 2011 Nobel Peace Prize for her role in organizing peaceful protests. In December 2011, Time magazine named "The Protester" its "Person of the Year". Spanish photographer Samuel Aranda won the 2011 World Press Photo award for his image of a Yemeni woman holding an injured family member, taken during the civil uprising in Yemen on 15 October 2011.
Later milestones in the timeline include the overthrow of Mohamed Morsi, who was subsequently convicted of espionage and of inciting the killing of protesters; the overthrow of Bashar al-Assad, who fled into exile in Russia; the start of the Yemeni crisis, followed by a civil war; and the start of the War in Iraq (2013–2017). The protests in Bahrain started on 14 February, and were initially aimed at achieving greater political freedom and respect for human rights; they were not intended to directly threaten the monarchy. Lingering frustration among the Shiite majority with being ruled by the Sunni government was a major root cause, but the protests in Tunisia and Egypt are cited as the inspiration for the demonstrations. The protests were largely peaceful until a pre-dawn raid by police on 17 February to clear protestors from Pearl Roundabout in Manama, in which police killed four protesters. Following the raid, some protesters began to expand their aims to a call for the end of the monarchy. On 18 February, army forces opened fire on protesters when they tried to reenter the roundabout, fatally wounding one. The following day protesters reoccupied Pearl Roundabout after the government ordered troops and police to withdraw. Subsequent days saw large demonstrations; on 21 February a pro-government Gathering of National Unity drew tens of thousands, whilst on 22 February the number of protestors at the Pearl Roundabout peaked at over 150,000 after more than 100,000 protesters marched there; demonstrators came under fire from the Bahraini military, which killed around 20 and injured over 100 protestors. On 14 March, GCC forces (composed mainly of Saudi and UAE troops) were requested by the government and occupied the country. King Hamad bin Isa Al Khalifa declared a three-month state of emergency on 15 March and asked the military to reassert its control as clashes spread across the country. On 16 March, armed soldiers and riot police cleared the protesters' camp in the Pearl Roundabout, in which 3 policemen and 3 protesters were reportedly killed. Later, on 18 March, the government tore down the Pearl Roundabout monument. After the lifting of emergency law on 1 June, several large rallies were staged by the opposition parties. Smaller-scale protests and clashes outside of the capital have continued to occur almost daily. On 9 March 2012, over 100,000 protested in what the opposition called "the biggest march in our history". The police response has been described as a "brutal" crackdown on peaceful and unarmed protestors, including doctors and bloggers. The police carried out midnight house raids in Shia neighbourhoods, beatings at checkpoints, and denial of medical care in a "campaign of intimidation". More than 2,929 people have been arrested, and at least five people died due to torture while in police custody. On 23 November 2011, the Bahrain Independent Commission of Inquiry released its report on its investigation of the events, finding that the government had systematically tortured prisoners and committed other human rights violations. It also rejected the government's claims that the protests were instigated by Iran. Although the report found that systematic torture had stopped, the Bahraini government has refused entry to several international human rights groups and news organizations, and delayed a visit by a UN inspector. More than 80 people had died since the start of the uprising. Even a decade after the 2011 uprisings, the situation in Bahrain remained unchanged. The regime continued suppression of all forms of dissent.
Years after the demonstrations, the Bahraini authorities are known to have accelerated their crackdown. They have been targeting human rights defenders, journalists, Shiite political groups and social media critics. Saudi government forces quashed protests in the country and assisted Bahraini authorities in suppressing demonstrations there. Jamal Khashoggi, a Saudi critic, covered the Arab Spring and spoke out against the Saudi government during this time. He was murdered by Saudi government agents in 2018. Inspired by the uprising in Tunisia and prior to his entry as a central figure in Egyptian politics, potential presidential candidate Mohamed ElBaradei warned of a "Tunisia-style explosion" in Egypt. Protests in Egypt began on 25 January 2011 and ran for 18 days. Beginning around midnight on 28 January, the Egyptian government attempted, somewhat successfully, to eliminate the nation's Internet access, in order to inhibit the protesters' ability to use media activism to organize through social media. Later that day, as tens of thousands protested on the streets of Egypt's major cities, President Hosni Mubarak dismissed his government, later appointing a new cabinet. Mubarak also appointed the first vice president in almost 30 years. The U.S. embassy and international students began a voluntary evacuation near the end of January, as violence and rumors of violence escalated. On 10 February, Mubarak ceded all presidential power to Vice President Omar Suleiman, but soon thereafter announced that he would remain as president until the end of his term. However, protests continued the next day, and Suleiman quickly announced that Mubarak had resigned from the presidency and transferred power to the Armed Forces of Egypt. The military immediately dissolved the Egyptian Parliament, suspended the Constitution of Egypt, and promised to lift the nation's thirty-year "emergency laws". A civilian, Essam Sharaf, was appointed as Prime Minister of Egypt on 4 March to widespread approval among Egyptians in Tahrir Square. Violent protests, however, continued through the end of 2011, as many Egyptians expressed concern about the Supreme Council of the Armed Forces' perceived sluggishness in instituting reforms and its grip on power. Hosni Mubarak and his former interior minister Habib el-Adly were sentenced to life in prison on the basis of their failure to stop the killings during the first six days of the 2011 Egyptian Revolution. His successor, the Muslim Brotherhood-affiliated Islamist Mohamed Morsi, won a presidential election in 2012 regarded as free and fair by election observers, and was subsequently sworn in before judges at the Supreme Constitutional Court. Fresh protests against Morsi erupted in Egypt on 22 November 2012. More protests against Morsi's rule occurred one year into his presidency in June 2013, and on 3 July 2013 the military overthrew Morsi's government, removing him from office. The Arab Spring was generally considered to have been a success in Egypt, much as in Tunisia. However, a December 2020 report published by PRI's The World, a US-based public radio news magazine, suggests otherwise. According to the report, the Egyptian government more than doubled the number of executions it carried out, putting approximately 60 people to death. This number, according to the report, included human rights activists of the Egyptian Initiative for Personal Rights (EIPR), who were arrested in November 2020.
The executive director of the Project on Middle East Democracy, Stephen McInerney, said that a majority of pro-democracy activists had escaped Egypt, while those who could not had gone into hiding. The Project on Middle East Democracy mentioned using encrypted communication channels to talk to the activists in order to protect their whereabouts. Western countries, including the United States, France, and several other European countries, are perceived to have generally overlooked these issues. The founder of the Tahrir Institute for Middle East Policy in Washington, DC believed that even ten years after the Arab Spring, Egypt was at its lowest point for human rights. Anti-government protests began in Libya on 15 February 2011. By 18 February, the opposition controlled most of Benghazi, the country's second-largest city. The government dispatched elite troops and militia in an attempt to recapture it, but they were repelled. By 20 February, protests had spread to the capital Tripoli, leading to a television address by Saif al-Islam Gaddafi, who warned the protestors that their country could descend into civil war. The rising death toll, numbering in the thousands, drew international condemnation and resulted in the resignation of several Libyan diplomats, along with calls for the government's dismantlement. Amidst ongoing efforts by demonstrators and rebel forces to wrest control of Tripoli from the Jamahiriya, the opposition set up an interim government in Benghazi to oppose Colonel Muammar Gaddafi's rule. However, despite initial opposition success, government forces subsequently took back much of the Mediterranean coast. On 17 March, United Nations Security Council Resolution 1973 was adopted, authorising a no-fly zone over Libya and "all necessary measures" to protect civilians. Two days later, France, the United States and the United Kingdom intervened in Libya with a bombing campaign against pro-Gaddafi forces. A coalition of 27 states from Europe and the Middle East soon joined the intervention. Pro-Gaddafi forces were driven back from the outskirts of Benghazi, and the rebels mounted an offensive, capturing scores of towns across the coast of Libya. The offensive stalled, however, and a counter-offensive by the government retook most of the towns, until a stalemate formed between Brega and Ajdabiya, the former held by the government and the latter in the hands of the rebels. Focus then shifted to the west of the country, where bitter fighting continued. After a three-month-long battle, the loyalist siege of rebel-held Misrata, the third-largest city in Libya, was broken in large part due to coalition air strikes. The four major fronts of combat were generally considered to be the Nafusa Mountains, the Tripolitanian coast, the Gulf of Sidra, and the southern Libyan Desert. In late August, anti-Gaddafi fighters captured Tripoli, scattering Gaddafi's government and marking the end of his 42 years of power. Many institutions of the government, including Gaddafi and several top government officials, regrouped in Sirte, which Gaddafi declared to be Libya's new capital. Others fled to Sabha, Bani Walid, and remote reaches of the Libyan Desert, or to surrounding countries. However, Sabha fell in late September, Bani Walid was captured after a grueling siege weeks later, and on 20 October, fighters under the aegis of the National Transitional Council seized Sirte, killing Gaddafi in the process. However, even after Gaddafi was killed, the civil war continued.
Protests in Syria started on 26 January 2011, when a police officer assaulted a man in public at "Al-Hareeka Street" in old Damascus. The man was arrested right after the assault. As a result, protesters called for the freedom of the arrested man. Soon a "day of rage" was set for 4–5 February, but it passed uneventfully. On 6 March, the Syrian security forces arrested about 15 children in Daraa, in southern Syria, for writing slogans against the government. Soon protests erupted over the arrest and abuse of the children. Daraa was to be the first city to protest against the Ba'athist government, which had been ruling Syria since 1963. Thousands of protesters gathered in Damascus, Aleppo, al-Hasakah, Daraa, Deir ez-Zor, and Hama on 15 March, with recently released politician Suhair Atassi becoming an unofficial spokesperson for the "Syrian revolution". The next day there were reports of approximately 3,000 arrests and a few casualties, but there are no official figures on the number of deaths. On 18 April 2011, approximately 100,000 protesters sat in the central square of Homs calling for the resignation of President Bashar al-Assad. Protests continued through July 2011, with the government responding with harsh security clampdowns and military operations in several districts, especially in the north. On 31 July, Syrian army tanks stormed several cities, including Hama, Deir ez-Zor, Abu Kamal, and Herak near Daraa. At least 136 people were killed, the highest death toll of any day since the start of the uprising. On 5 August 2011, an anti-government demonstration called "God is with us" took place in Syria, during which the Syrian security forces shot at the protesters from inside ambulances, killing 11 people. The Arab Spring events in Syria subsequently escalated into the Syrian civil war. The war caused massive political instability and economic hardship in Syria, with the Syrian pound plunging to new lows. On 8 December 2024, the Assad regime collapsed during a major offensive by opposition forces. The offensive was spearheaded by Hay'at Tahrir al-Sham (HTS) and supported mainly by the Turkish-backed Syrian National Army. As another rebel coalition advanced towards Damascus, reports emerged that Bashar al-Assad had fled the capital aboard a plane to Russia, where he joined his family, already in exile, and was granted asylum. Following Assad's departure, opposition forces declared victory on state television. Concurrently, the Russian Ministry of Foreign Affairs confirmed Assad's resignation and his departure from Syria. The fall of the Assad regime after 54 years of rule and 13 years of civil war was met with shock and surprise throughout Syria and the world. Syrian opposition fighters were surprised at how quickly the Syrian government collapsed in the wake of their offensive. Analysts viewed the event as a significant blow to Iran's Axis of Resistance, given Syria's use as a waypoint for supplying arms to its ally Hezbollah. Following the self-immolation of Mohamed Bouazizi in Sidi Bouzid, a series of increasingly violent street demonstrations through December 2010 ultimately led to the ousting of longtime President Zine El Abidine Ben Ali on 14 January 2011. The demonstrations were preceded by high unemployment, food inflation, corruption, lack of freedom of speech and other forms of political freedom, and poor living conditions.
The protests constituted the most dramatic wave of social and political unrest in Tunisia in three decades and resulted in scores of deaths and injuries, most of which were the result of action by police and security forces against demonstrators. Ben Ali fled into exile in Saudi Arabia, ending his 23 years in power. A state of emergency was declared, and a caretaker coalition government was created following Ben Ali's departure, which included members of Ben Ali's party, the Constitutional Democratic Rally (RCD), as well as opposition figures from other ministries. The five newly appointed non-RCD ministers resigned almost immediately. As a result of continued daily protests, on 27 January Prime Minister Mohamed Ghannouchi reshuffled the government, removing all former RCD members other than himself, and on 6 February the former ruling party was suspended; later, on 9 March, it was dissolved. Following further public protests, Ghannouchi himself resigned on 27 February, and Beji Caid Essebsi became prime minister. On 23 October 2011, Tunisians voted in the first post-revolution election to elect representatives to a 217-member constituent assembly that would be responsible for the new constitution. The leading Islamist party, Ennahda, won 37% of the vote, and elected 42 women to the Constituent Assembly. On 26 January 2014, a new constitution was adopted. The constitution is seen as progressive, increasing human rights and gender equality, spelling out government duties toward the people, laying the groundwork for a new parliamentary system, and making Tunisia a decentralized and open government. On 26 October 2014, Tunisia held its first parliamentary elections since the 2011 Arab Spring, and its presidential election followed on 23 November 2014, finishing its transition to a democratic state. These elections were characterized by a decline in Ennahda's popularity in favor of the secular Nidaa Tounes party, which became the leading party in the country. In the United Arab Emirates, the Arab Spring saw a sudden and intense demand for democratic reforms. However, government repression of human rights, including unlawful detentions and torture, quelled the opposition and silenced dissenters. Even years after the Arab Spring uprisings, the Emirates remain in staunch opposition to free speech. In 2011, 133 peaceful political activists, including academics and members of a social organization, Islah, signed a petition calling for democratic reforms. Submitted to the Emirati monarch rulers, the petition demanded elections, more legislative powers for the Federal National Council, and an independent judiciary. In 2012, the authorities arrested 94 of the 133 journalists, government officials, judges, lawyers, teachers and student activists, who were detained in secret detention facilities. For a year, until the trial began in March 2013, the 94 prisoners were subjected to enforced disappearance and torture. When the "unfair" trial ended on 2 July 2013, 69 men were convicted on the basis of evidence acquired through forced confessions and received harsh prison sentences of up to 15 years. The case came to be known as "UAE-94", following which freedom of speech was further curbed. For years, these prisoners have been under arbitrary detention, with some "held incommunicado, and denied their rights". In July 2021, Amnesty International called on the UAE authorities to immediately release 60 prisoners of the UAE-94 case who remained detained nine years after their arrest.
At least 51 prisoners who were part of the "UAE-94" mass trial were still being imprisoned despite having completed their sentences. Some prisoners completed their sentences in March 2023, while others had completed theirs as early as July 2019. Human Rights Watch (HRW) said that the prisoners remained in prison without a proper legal basis even after completing their sentences, in some cases between one month and nearly four years earlier. Following the 2011 petition, the UAE authorities also arrested five prominent human rights defenders and government critics who had not signed the petition. All were pardoned the next day but have since faced repeated mistreatment by the government. One of the prominent Emirati activists, Ahmed Mansoor, reported being beaten twice since then. His passport was confiscated and nearly $140,000 was stolen from his personal bank account. Most of the human rights activists have been victims of the UAE government's intimidation for years. The authorities also exiled to Thailand an activist who had spoken out against the government.

In Yemen, protests occurred in many towns in both the north and south of the country starting in mid-January 2011. Demonstrators in the South mainly protested against President Saleh's support of Al Qaeda in South Yemen, the marginalization of the Southern people, and the exploitation of Southern natural resources. Other parts of the country initially protested against governmental proposals to modify the constitution of Yemen, unemployment and economic conditions, and corruption, but their demands soon included a call for the resignation of President Ali Abdullah Saleh, who had been facing internal opposition from his closest advisors since 2009. A major demonstration of over 16,000 protesters took place in Sana'a on 27 January 2011, and soon thereafter human rights activist and politician Tawakkol Karman called for a "Day of Rage" on 3 February. According to Xinhua News, organizers were calling for a million protesters. In response to the planned protest, Ali Abdullah Saleh stated that he would not seek another presidential term in 2013. On 3 February, 20,000 protesters demonstrated against the government in Sana'a, and others participated in the "Day of Rage" in Aden called for by Tawakkol Karman, while soldiers, armed members of the General People's Congress, and many protesters held a pro-government rally in Sana'a. Concurrent with the resignation of Egyptian president Mubarak, Yemenis again took to the streets protesting President Saleh on 11 February, in what was dubbed a "Friday of Rage". The protests continued in the days following despite clashes with government advocates. In a "Friday of Anger" held on 18 February, tens of thousands of Yemenis took part in anti-government demonstrations in the major cities of Sana'a, Taiz, and Aden. Protests continued over the following months, especially in the three major cities, and briefly intensified in late May into urban warfare between Hashid tribesmen and army defectors allied with the opposition on one side and security forces and militias loyal to Saleh on the other. After Saleh three times pretended to accept a Gulf Cooperation Council-brokered plan allowing him to cede power in exchange for immunity from prosecution, only to back away before signing, an assassination attempt on 3 June left him and several other high-ranking Yemeni officials injured by a blast in the presidential compound's mosque.
Saleh was evacuated to Saudi Arabia for treatment and handed power to Vice President Abdrabbuh Mansur Hadi, who largely continued his policies and ordered the arrest of several Yemenis in connection with the attack on the presidential compound. While in Saudi Arabia, Saleh kept hinting that he could return at any time, and he remained present in the political sphere through television appearances from Riyadh, starting with an address to the Yemeni people on 7 July. On 13 August, a demonstration dubbed "Mansouron Friday" was announced in Yemen, in which hundreds of thousands of Yemenis called for Saleh to go; the protesters joining "Mansouron Friday" called for the establishment of "a new Yemen". On 12 September, while still receiving treatment in Riyadh, Saleh issued a presidential decree authorizing Hadi to negotiate a deal with the opposition and sign the GCC initiative. On 23 September, three months after the assassination attempt, Saleh returned to Yemen abruptly, defying all earlier expectations. Pressure on Saleh to sign the GCC initiative eventually led to his doing so in Riyadh on 23 November. Saleh thereby agreed to step down and set the stage for the transfer of power to his vice president. A presidential election was then held on 21 February 2012, in which Hadi (the only candidate) won 99.8% of the vote. Hadi took the oath of office in Yemen's parliament on 25 February, and by 27 February Saleh had resigned from the presidency and transferred power to Hadi. The replacement government was overthrown by Houthi rebels on 22 January 2015, starting the Yemeni Civil War and the Saudi Arabian-led intervention in Yemen. Outcomes In the aftermath of the Arab Spring in various countries, there was a wave of violence and instability commonly known as the Arab Winter or Islamist Winter. The Arab Winter was characterized by extensive civil wars, general regional instability, economic and demographic decline of the Arab League, and religious wars between Sunni and Shia Muslims. Although the long-term effects of the Arab Spring have yet to be shown, its short-term consequences varied greatly across the Middle East and North Africa. In Tunisia and Egypt, where the existing regimes were ousted and replaced through a process of free and fair elections, the revolutions were considered short-term successes. This interpretation is, however, problematized by the subsequent political turmoil that emerged in Egypt and the autocracy that has formed in Tunisia. Elsewhere, most notably in the monarchies of Morocco and the Persian Gulf, existing regimes co-opted the Arab Spring movement and managed to maintain order without significant social change. In other countries, particularly Syria and Libya, the apparent result of Arab Spring protests was a complete societal collapse.[failed verification – see discussion] Social scientists have endeavored to understand the circumstances that led to this variation in outcome. A variety of causal factors have been highlighted, most of which hinge on the relationship between the strength of the state and the strength of civil society. Countries with stronger civil society networks in various forms underwent more successful reforms during the Arab Spring; these findings are also consistent with more general social science theories, such as those espoused by Robert D. Putnam and Joel S. Migdal.
One of the primary influences that has been highlighted in the analysis of the Arab Spring is the relative strength or weakness of a society's formal and informal institutions prior to the revolts. When the Arab Spring began, Tunisia had an established infrastructure and a lower level of petty corruption than did other states, such as Libya. This meant that, following the overthrow of the existing regime, there was less work to be done in reforming Tunisian institutions than elsewhere, and consequently it was less difficult to transition to and consolidate a democratic system of government. Also crucial was the degree of state censorship over print, broadcast, and social media in different countries. Television coverage by channels like Al Jazeera and BBC News provided worldwide exposure and prevented mass violence by the Egyptian government in Tahrir Square, contributing to the success of the Egyptian Revolution. In other countries, such as Libya, Bahrain, and Syria, such international press coverage was not present to the same degree, and the governments of these countries were able to act more freely in suppressing the protests. Strong authoritarian regimes with high degrees of censorship in their national broadcast media were able to block communication and prevent the domestic spread of information necessary for successful protests. Countries with greater access to social media, such as Tunisia and Egypt, proved more effective in mobilizing large groups of people, and appear to have been more successful overall than those with greater state control over media. Although social media played a large role in shaping the events of the revolutions, social activism did not occur in a vacuum; without street-level organization, social activists would not have been as effective. Even though a revolution took place and the prior government was replaced, Tunisia's government cannot assume that another uprising will not occur, as many grievances persist. In Tunisia, with tourism brought to a halt during the revolution, among other factors, the budget deficit has grown and unemployment has risen since 2011. According to the World Bank in 2016, "Unemployment remains at 15.3% from 16.7% in 2011, but still well above the pre-revolution level of 13%." Large-scale emigration brought on by a long and treacherous civil war has permanently harmed the Syrian economy; projected economic contraction remained high, at almost 7% for 2017. The support, even if tacit, of national military forces during protests has been correlated with the success of the Arab Spring movement in different countries. In Egypt and Tunisia, the military actively participated in ousting the incumbent regime and in facilitating the transition to democratic elections. Countries like Saudi Arabia, on the other hand, exhibited a strong mobilization of military force against protesters, effectively ending the revolts in their territories; others, including Libya and Syria, failed to stop the protests entirely and instead ended up in civil war. The support of the military in Arab Spring protests has also been linked to the degree of ethnic homogeneity in different societies. In Saudi Arabia and Syria, where the ruling elite was closely linked with ethnic or religious subdivisions of society, the military sided with the existing regime and took on the ostensible role of protector to minority populations.
The presence of a strong, educated middle class has been noted as a correlate of the success of the Arab Spring in different countries. Countries with strong welfare programs and a weak middle class, such as Saudi Arabia and Jordan, as well as countries with great economic disparity and an impoverished working class (including Yemen, Libya, and Morocco) did not experience successful revolutions. The strength of the middle class is, in turn, directly connected to the existing political, economic, and educational institutions in a country, and the middle class itself may be considered an informal institution. In very broad terms, this may be reframed in terms of development, as measured by various indicators such as the Human Development Index: rentier states such as the oil monarchies of the Persian Gulf exhibited less successful revolutions overall. Charting what he calls the 'new masses' of the twenty-first century, sociologist Göran Therborn draws attention to the historically contradictory role of the middle class. The Egyptian middle class illustrated this ambivalence and contradiction in 2011 and 2013: "The volatility of middle-class politics is vividly illustrated by the sharp turns in Egypt, from acclamation of democracy to adulation of the military and its mounting repression of dissent, effectively condoning the restoration of the ancien régime minus Mubarak." Some trends in political Islam resulting from the Arab Spring have been noted by observers such as Quinn Mecham and Tarek Osman: "The repercussions of the 2011 uprisings have influenced Middle Eastern youth's experiences, providing impetus for questioning perennial sacred beliefs and positions, and forging ahead avant-garde views and responses to the constraints they face." Contrary to the common discourse, Hussein Agha and Robert Malley of The New Yorker argue that the divide in the post-Arab Spring Middle East is not sectarian: The bloodiest, most vicious, and most pertinent struggles occur squarely inside the Sunni world. Sectarianism is a politically expedient fable, conveniently used to cover up old-fashioned power struggles, maltreatment of minorities, and cruel totalitarian practices. Agha and Malley point out that even in Syria there has been a misrepresentation of the conflict: the Assad regime relied on an alliance that included middle-class Sunnis along with religious minorities. Prior to the uprising, the Syrian regime enjoyed some financial and political support from Sunni Gulf states. The "select rich urban bourgeoisie, the Sunni Damascene in particular", according to Tokyo University researcher Housam Darwisheh, "now has a direct interest in preserving stability and their relations with the regime as long as their businesses prosper." In the view of the Arab sociologist Halim Barakat, "the persistence of communal cleavages complicates rather than nullifies social class consciousness and struggles."

The 2018–2024 Arab protests were a series of anti-government protests which began in several Arab world countries in 2018. In Iraq, the deadliest incident of civil unrest since the fall of Saddam Hussein resulted in prime minister Adil Abdul-Mahdi being replaced. Sustained civil disobedience in Sudan resulted in the overthrow of president Omar al-Bashir in a military coup d'état, the Khartoum massacre, and the transfer of power from a military junta to the Transitional Sovereignty Council, but led to a civil war in 2023.
In Algeria, a series of mass protests resulted in the resignation of president Abdelaziz Bouteflika and the postponement of the scheduled presidential election. Other protests also took place in Egypt, Jordan, Lebanon, Morocco, Syria, and Tunisia, along with economic protests in the Gaza Strip. Revolution or reform Very few analysts of Arab societies foresaw a mass movement on such a scale that might threaten the existing order. In his 1993 sociological study of Arab society, culture, and state, Barakat stated confidently that "one should expect the first Arab popular revolution to take place in Egypt or Tunisia. This does not, however, exclude the possibility that revolutions may occur in more pluralistic societies as well." What was prevalent, according to the Syrian writer and political dissident Yassin al-Haj Saleh, were three 'springs' that ensured the status quo. One of these was a "spring of despotic states that receive assistance and legitimacy from a world system centered around stability". Most democracy protests do not result in reforms. Two months into the Tunisian and Egyptian uprisings, The Economist magazine in a leader article spoke about a new generation of young people, idealists "inspired by democracy", who made revolutions. Those revolutions, the article stated, "are going the right way, with a hopeful new mood prevailing and free elections in the offing". For those on the streets of Egypt, the predominant slogan was "bread, freedom and social justice". Some observers, however, have questioned the revolutionary nature of the 'Arab Spring'. Asef Bayat, a social theorist specialising in social movements and social change in the Middle East, has provided an analysis based on his decades-long research as "a participant-observer" (in his own words). In his appraisal of the Arab revolutions, Bayat discerns a remarkable difference between these revolutions and the revolutions of the 1960s and 1970s in countries like Yemen, Nicaragua, and Iran. The Arab revolutions, argues Bayat, "lacked any associated intellectual anchor", and the predominant voices, "secular and Islamists alike, took free market, property relations, and neoliberal rationality for granted" and uncritically. 'New social movements' define themselves as horizontal networks with an aversion to the state and central authority. Thus their "political objective is not to capture the state", a fundamental feature of twentieth-century revolutionary movements. Instead of revolution or reform, Bayat speaks of 'refolution'. Wael Ghonim, an Internet activist who would later gain international fame, acknowledged that what he had intended by founding a Facebook page was a "simple reaction to the events in Tunisia", and that "there was no master plans or strategies" a priori. That the objective was reform, achieved through peaceful means rather than revolution, was explicitly put forward by the April 6 Movement, one of the leading forces of the Egyptian uprising, in its statements. It called for "coalition and co-operation between all factions and national forces to reach the reform and the peaceful change of the conditions of Egypt". "Even in Tahrir Square with so many people and the rising level of demands," recalls an activist in the movement, "we were very surprised by the people wanting the downfall of the regime; and not a single one of us had expected this."
In comparing the uprisings in Tunisia, Egypt, Libya, and Syria, researcher Housam Darwisheh concludes: "The Egyptian uprising, in neither dismantling the ancien régime nor creating new institutional mechanisms to lead the transition, permitted the so-called 'deep state' to reassert itself, while the deepening polarization led many non-Islamists to side with the military against the MB [the Muslim Brotherhood]." According to Cambridge sociologist Hazem Kandil, the Muslim Brotherhood did not aim at taking power during the events leading up to the toppling of Mubarak. The largest and most organised group in Egypt in fact negotiated with the regime in "infamous talks between Morsi and the then vice-president Omar Suleiman", and "an informal deal was reached: withdraw your members from Tahrir Square, and we allow you to form a political party." The Brotherhood then wavered over whether to field a presidential candidate and did not push for a new constitution, choosing to work with the Supreme Council of the Armed Forces (SCAF): The Brotherhood and the Salafists went all-out to keep the existing constitution—originating under Sadat—with a few amendments. The result was irrelevant, because the military scrapped the old constitution anyway. But the Brothers managed to persuade over 70 per cent of the voters, so it became clear to the military that they had far more sway on the street than the secular revolutionaries who had brought down Mubarak, yet seemed incapable of much organization once they had done so. For SCAF, the priority was to bring the street under control, so it decided to start working with the Brotherhood to stabilize the country. George Lawson of the London School of Economics places the Arab uprisings within the post-Cold War world. He characterises the uprisings as "largely unsuccessful revolution" and writes that they "bear a family resemblance to the 'negotiated revolutions'... Negotiated revolutions ... seek to transform political and symbolic fields of action, but without a concomitant commitment to a program of economic transformation." In this 'negotiated revolution', comments Bayat, "revolutionaries had in effect little part in the 'negotiations'." What some analysts have treated as the intellectual weakness of the revolutionary movement is partly due to the stifling pre-2011 cultural environment under repressive regimes. Although Egyptian intellectuals enjoyed a bigger margin of freedom than their counterparts in Tunisia, cultural figures sought protection from political players, and instead of leading criticism, they complied. The post-Cold War era saw the emergence of the idea and practice of gradual reform and a liberal agenda. It saw an influx of humanitarian projects, NGOs and charity work, liberal think tanks, and an emphasis on civil society work. This new juncture seemed to have made the idea and prospect of revolution an outdated project. The focus instead shifted to individual freedoms and the free market. The new idea of civil society was different from the kind of civil society Antonio Gramsci, for instance, envisaged: 'a revolution before the revolution'. In her field study in Yemen, anthropologist Bogumila Hall depicts the effects of what she terms "the marketization of civil society and its heavy reliance on donors", which "led to a largely depoliticized form of activism that bypassed, rather than confronted, the state".
Hall, with her focus on the muhammashīn (the marginalized) in Yemen, described how in the 1990s and 2000s international NGOs established charity projects and workshops "to teach slum dwellers new skills and behaviours". But, besides the "modest changes" brought by the NGOs, concludes Hall, "delegating the problem of the muhammashīn to the realm of development and poverty alleviation, without addressing the structural causes underlying their marginalisation, had a depoliticising effect. It led to a widely held assumption, also shared by the muhammashīn, that ending marginalisation was a matter for experts and administrative measures, not politics." When Arab regimes viewed NGO leaders and similar organisations with suspicion, accusing Western governments of providing funding and training to 'illegal organisations' and fomenting revolution, diplomatic cables reported "how American officials frequently assured skeptical governments that the training was aimed at reform, not promoting revolutions". And when the Egyptian uprising was gaining momentum, the American president Barack Obama "did not suggest that the 82-year-old leader step aside or transfer power... the argument was that he really needed to do the reforms, and do them fast. Former ambassador to Egypt Frank G. Wisner publicly suggested that Mr. Mubarak had to be at the center of any change, and Secretary of State Hillary Rodham Clinton warned that any transition would take time." Some activists who read the American thinker and nonviolence advocate Gene Sharp obtained training from foreign bodies, including the Serbian opposition movement Otpor!, and the April 6 Movement modelled its logo on Otpor's. Otpor, writes Bayat in his discussion of the agencies of Arab Spring activism in Tunisia and Egypt, obtained funds from well-known American organisations such as the National Endowment for Democracy, USAID, and the International Republican Institute. Thus Otpor, in line with these organisations' advocacy, "pushed for political reform through nonradical, electoral, and market-driven language and practices".

Early 2019 witnessed two uprisings: one in Algeria and another in Sudan. In Algeria, under the pressure of weeks of protests, the head of the army forced the ailing president of twenty years, Abdelaziz Bouteflika, to resign. In Sudan, after four months of protests, the Sudanese defense minister ousted longtime President Omar al-Bashir in a coup. Writing about what he calls "a rebirth of Tahrir Square", the prominent Lebanese novelist and critic Elias Khoury averred that "perhaps the secret of the Arab Spring lies not in its victories or defeats, but in its ability to liberate people from fear." Despite the "faded spirit of Tahrir Square" and an outcome that Khoury describes as a "monarchy that abrogates legal standards", a renaissance of resistance is unstoppable: The defeat of the Arab Spring has seemed likely to extinguish this glimmer of hope, to return the Arab world to the tyrannical duopoly of military and oil and to crush the will of the people in the struggle between Sunni and Shia, between Saudi Arabia and Iran. The combination has thrown the region into Israel's lap. But the defeat cannot and will not stop the renaissance. If the Arab world has reached rock bottom, it can't go any lower and it can't last forever.
There was a need, suggested Khoury, to turn "the uprisings of the Arab Spring into an intellectual, political and moral project that gives meaning to the goals of freedom, democracy and social justice". From the outset, the 2011 Arab uprisings raised the banner of 'social justice'. What the concept means and how to achieve it have been major subjects of discussion and contention ever since. In its economic and social manifesto, the Tunisian Ennahda Movement states that the movement "adopts the social and solidarised market economy within a national approach based on free economic activity, freedom of ownership, production and administration on the one hand, and social justice and equal opportunities on the other hand", and that "national capital has to be the axis in the development process." The Muslim Brotherhood in Egypt mainly focuses on "reform of existing political systems in the Arab world. It embraces the idea of political activism and social responsibility, organising charitable works and social support programmes as part of its outreach to its core support base of lower-income populations." For its part, the International Centre for Transitional Justice has set nine 'concrete and tangible' goals focused on "accountability for serious violations of human rights, access to justice, facilitating peace processes, advancing the cause of reconciliation and reforming the state and social institutions". One of those goals was taken up by Tunisia's Truth and Dignity Commission, which recorded and submitted to the relevant court the human rights abuses committed by the Tunisian regime. A new climate of freedom of speech, freedom of organisation, and elections characterised the political environment of post-Ben Ali Tunisia. Some observers and social analysts remarked, however, that the issue of social justice remained rhetoric or was marginalised. According to Fathi Al-Shamikhi, an expert in debt issues and founder of the Tunisian association RAID, different social forces played a crucial role in matters related to social demands and achieving social justice: "This role varies between those who advocate these demands and those who reject them, according to the social nature of each of these forces." "Bread, freedom and social justice" were the main slogans of the Arab revolutions. But although social and economic demands were raised, argued Wael Gamal, researcher and former editor-in-chief of the Egyptian newspaper Al-Shorouq, "they were pushed aside in the political arena, and more attention was given to issues such as the transfer of power arrangements, the constitution first, the elections first, democratic transformation and the religious-secular conflict." With the survival of the regime in Egypt and the rolling back of what was gained in the short period after the overthrow of Mubarak, the persistence or even worsening of the socio-economic conditions that led to the Tunisian uprising, the Saudi-led intervention in Bahrain that assisted the defeat of the uprising there, and especially the descent of other uprisings into brutal civil wars in Syria, Libya and Yemen, with acute humanitarian crises, there are: many in capitals around the world who find it convenient to insist that a strongman is needed to deal with the peoples of this region. It is a racist, bigoted argument and should be called out as such, but many political leaders of the region are quite comfortable promoting it.
Indeed, many of the counterrevolutionary moves in the region happened precisely because those behind them agree with that argument. In April 2019, amidst an offensive to take Libya's capital city of Tripoli by military leader Khalifa Haftar, for whom U.S. President Donald Trump had voiced his support, the Syrian policy scholar Marwan Kabalan argued in an opinion piece for Al Jazeera that "counter-revolutionary forces are seeking to resurrect the military dictatorship model the Arab Spring dismantled." Kabalan contended that "regional and world powers have sponsored the return of military dictatorships to the region, with the hope that they would clean up the Arab Spring 'mess' and restore order." He also referred to Western powers' history of backing military rule in the region, and to how American interests in the Middle East clashed with French and British ones. He cited the U.S.-supported coups in Syria and Egypt, but more generally how, as former U.S. Secretary of State Condoleezza Rice admitted, the United States "pursued stability at the expense of democracy... and achieved neither." Kabalan concluded: There seems to be a concerted effort to establish a crescent of military-ruled countries from Sudan in northeast Africa to Algeria in the northwest through Egypt and Libya to ward off popular upheaval and keep "Islamist" forces in check. Analyst H. A. Hellyer attributes the persistence of autocracy and dictatorship, as well as counter-revolution, to structures that go back to colonialism – and also to the forms that states in the MENA region took in the postcolonial era and the social pacts established in the process. What we have been seeing since 2011, Hellyer says, is a clash between those "inherited structures" and the new "demographic realities" of the populations in the region. Compromise and dialogue with the entrenched regimes, followed by elections in Tunisia and Egypt, have produced either limited change or counter-revolution. In the first quarter of 2019, protests and mass mobilisation in Sudan and Algeria succeeded in toppling the heads of state, but, as scholar and Woodrow Wilson Center fellow Marina Ottaway notes, there is a dilemma: the demands of genuine grassroots movements are unlikely "to be attained through a peaceful process – one without violence and the violation of the human rights of many." Ottaway points to the experiences of Algeria and Egypt; in the former, the regime annulled the results of the elections in the early 1990s, and in the latter, the military carried out a bloody repression of the Muslim Brotherhood after its short-lived government was removed from office: Attempts to bring about radical changes, by punishing and excluding a large part of the old elite, are not possible by democratic means, because such efforts elicit a strong reaction – a counterrevolution – leading to violence and repression. By country See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Superplan] | [TOKENS: 145]
Contents Superplan Superplan was a high-level programming language developed between 1949 and 1951 by Heinz Rutishauser, the name being a reference to "Rechenplan" (i.e. computation plan), which in Konrad Zuse's terminology designates a single Plankalkül program. The language was described in Rutishauser's 1951 publication Über automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen (i.e. Automatically created Computation Plans for Program-Controlled Computing Machines). Superplan introduced the keyword für as its for loop, which had the following form (a_i being an array item): See also References Further reading
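As a hedged illustration only (the notation below follows common secondary accounts of Rutishauser's work, not this text): the für loop is usually rendered as "Für i = 1(1)n", read as start, step in parentheses, end. In a modern language, the equivalent iteration over the array items a_1 .. a_n might look like this Python sketch:

    # Hedged sketch: "Für i = 1(1)n" runs i from 1 to n in steps of 1,
    # with the loop body operating on the array item a_i.
    a = [2.0, 4.0, 6.0]          # array whose items play the role of a_i
    n = len(a)
    for i in range(1, n + 1):    # Für i = 1(1)n
        a[i - 1] = a[i - 1] / 2  # body: transform a_i
    print(a)                     # [1.0, 2.0, 3.0]

This start(step)end scheme is often cited as a precursor of the for statement later adopted in Algol.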
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mikveh] | [TOKENS: 4251]
Contents Mikveh A mikveh (pronounced /ˈmik.ve/; Hebrew: מִקְוֶא, romanized: miqveʾ, lit. 'a gathering [of water]'; pl. mikve'ot or mikvot[a]) or mikvah (IPA: /miqˈwaː/) is a bath used during ritual immersion in Judaism to achieve ritual purity. In Orthodox Judaism, these regulations are steadfastly adhered to; consequently, the mikveh is central to an Orthodox Jewish community. Conservative Judaism also formally holds to the regulations. The existence of a mikveh is considered so important that, according to Halakha, a Jewish community is required to construct a kosher mikveh even before building a synagogue, and must go to the extreme of selling Torah scrolls, or even a synagogue if necessary, to provide funding for its construction. Etymology The word is formed from the Semitic root ק-ו-ה (q-w-h, 'collect'). In the Hebrew Bible, it is employed in the sense of "collection", including in the phrase "וּלְמִקְוֵה הַמַּיִם" (ū·l·miqwê ha·m·má·yim, lit. 'And to the gathering of the waters') in Genesis 1:10, as well as in similar usages in Exodus 7:19 and Leviticus 11:36. Ben Sira, in the Jewish apocryphal Book of Sirach 10:13 (and at other points in the text), is the earliest author to use "מִקְוֶה" as a word for "pool",[citation needed] and the Mishnah is the earliest source to use it in the sense of "ritual bath". History There are no written records or archaeological evidence of specific Jewish ritual cleansing installations prior to the first century BCE. Mikvot first appear in the historical record in the 1st century BCE, and from that time, ancient mikvot are found across the Land of Israel and in historic Jewish communities worldwide. Hundreds of mikvot from the Second Temple period have been discovered so far across the Land of Israel, including in Jerusalem, Hebron, Masada, and Hannaton. The lack of dedicated mikvot before the 1st century BCE is notable, especially since many early Jews did follow purification laws, as shown by the accounts recorded in 1 Samuel 20:26 and 21:5; 2 Samuel 11:4; and 2 Chronicles 30:15 and 30:24, as well as the Elephantine papyri and ostraca. One suggestion is that Jews used natural water sources such as springs for immersion, rather than building dedicated mikvot. Alternatively, according to many Halakhic authorities, the prohibition on using pumped water for a mikveh is rabbinic, not biblical. Prior to the creation of such a rabbinic decree around 100 BCE,[dubious – discuss] Jews may have immersed in above-ground basins that were built as part of buildings, or affixed to the roofs of buildings, and filled manually. Such structures, dating to the First Temple period, have been discovered in ancient Ashdod and possibly in Dan. The reason for such a rabbinic decree may have been to distance the practice of ritual immersion from the culture of bathhouses, which spread through the region during the Hellenistic period. Requirements The traditional rules regarding the construction of a mikveh are based on those specified in classical rabbinic literature. Numerous biblical laws indicate that one must "bathe their flesh in water" to become purified from ritual impurity. The type of bathing is specified in Leviticus 11:36, which states that "a spring, or a cistern, a gathering (mikveh) of water" is a source of purity. A mikveh must be built into the ground or built as an essential part of a building. Portable receptacles, bathtubs, whirlpools, and jacuzzis therefore cannot function as mikvot.
However, many Sephardic communities, as well as Ashkenazi Jews in America before World War II, customarily allowed mikvot to be filled using municipal water. Bans on such practices became common in the US only after an influx of European Ashkenazi rabbis, who saw the use of municipal water as too lenient. Some rabbis considered permitting spas to be used, but ultimately decided against it, as it might encourage women to prefer warm water during immersion instead of prioritizing cold water. According to Rabbi Isaac Esrig, in 1957 most American mikvaot were filled using municipal water. Mikveh water must have collected naturally (bidei shamayim) rather than by human action. Thus, mikveh water must flow naturally to the mikveh from the source (rain or a spring). This essentially means that it must be supplied by gravity or a natural pressure gradient, and cannot be pumped there by hand or carried. As a result, tap water cannot be used as the primary water source for a mikveh, although it can be used to top the water up to a desired level, provided the minimum amount (40 seah) of ritually appropriate water is in the mikveh first; in practice, this means that for a pool of at least 80 seahs (approximately 1,150 litres) the majority of its volume can be tap water. The water is also forbidden to pass through any vessel which could hold water within it or is capable of becoming impure (anything made of metal); however, pipes open to the air at both ends are fine, so long as there is no significant curvature. Frozen water (snow, ice, and hail) is exceptional in that it may be used to fill the mikveh no matter how it was transported. Although not commonly accepted, at least one American Orthodox rabbi advocated a home mikveh using tap water, for those women who did not have access to a standard mikveh. Since the water would flow only through pipes that open at both ends, the municipal and in-home plumbing would be construed as a non-vessel. So long as the pipes, hoses, and fittings were all freestanding and not held in the hand, they could be used to fill a mikveh receptacle that met all other requirements. The use of tap water for such a mikveh was controversial and was rejected by the majority of rabbinic authorities at the time and afterwards. The laws for a mikveh are slightly different from those for a spring. Mikveh water must be at rest, while spring water can still be flowing. Thus, flowing rivers may only be used for immersion when most of their water comes from springs, rather than rainfall or snowmelt. Seas may be used (even if waves are present). A mikveh must contain enough water to cover the entire body of an average-sized person; based on a mikveh with dimensions of 3 cubits deep, 1 cubit wide, and 1 cubit long, the necessary volume of water was estimated as 40 seah. The exact volume referred to by a seah is debated, and classical rabbinical literature specifies only that it is enough to fit 144 eggs; most Orthodox Jews follow the stringent ruling of Avrohom Yeshaya Karelitz, according to which one seah is 14.3 litres, and therefore a mikveh must contain at least some 575 litres.
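To make the volume arithmetic concrete, a minimal Python sketch (the constant name is mine; the litres-per-seah figure is the Karelitz value cited above):

    # Check of the volume figures cited above, assuming one seah = 14.3 litres.
    SEAH_LITRES = 14.3
    minimum = 40 * SEAH_LITRES        # minimum of naturally collected water
    print(minimum)                    # 572.0 -> "at least some 575 litres"
    pool = 80 * SEAH_LITRES
    print(pool)                       # 1144.0 -> "approximately 1,150 litres"
    # In a pool of at least 80 seahs, the 40-seah natural minimum leaves half
    # or more of the volume that may be topped up with tap water.
    print(pool - minimum >= minimum)  # True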
This volume of water can later be topped up with water from any source, but if there were fewer than 40 seahs of water in the mikveh to begin with, then the addition of 3 or more pints of water that did not meet the strict requirements would render the mikveh unfit for use, regardless of whether more water from a natural source was added later; a mikveh rendered unfit for use in this way would need to be completely drained and refilled in the prescribed way. Inasmuch as water that collects naturally according to halachic prescriptions is hard to come by in urban areas, various methods are employed to establish a valid mikveh. One is that tap water is made to flow into a kosher mikveh and through a conduit into a larger pool in which users actually bathe. A second method is to create a mikveh in a deep pool, place a floor with holes over it, and then fill the upper pool with tap water; in this way, it is considered as if the person dipping is actually "in" the pool of rain water. Additionally, the hashoko method involves using two pools: one filled with at least 40 seahs of natural water and one filled with tap water. A hole at least 5 cm (2 in) wide in the wall of the pool filled with tap water connects it to the pool filled with natural water; when these two collections of water touch, the tap-water pool becomes valid for ritual immersion. Most contemporary mikvot are indoor constructions involving rainwater collected from a cistern and passed through a duct by gravity into an ordinary bathing pool; the mikveh can be heated to make the experience of bathing more comfortable, taking into account certain rules, often resulting in an environment not unlike a spa. Background Traditionally, the mikveh was used by both men and women to regain ritual purity after various events, according to regulations laid down in the Torah and in classical rabbinical literature. Cases where Jews commonly immerse in a mikveh nowadays, in order to fulfill a requirement of Torah or rabbinic law, include: Other cases where immersion in a mikveh would be required to become pure, but have not generally been practiced since the destruction of the Temple (as a state of purity is generally not required outside the Temple), include: Customs exist to immerse in a mikveh in some of the following circumstances, with the customs varying by community: Immersion for men is more common in Hasidic communities, and done rarely in others, like German Jewish communities, where it is generally done only before the High Holidays. Requirements during use There is supposed to be no barrier between the person immersing and the water. The person should be wearing no clothes, jewelry, makeup, nail polish, fake nails, false eyelashes, contact lenses, or grooming products on the hair or skin. The person should carefully wash the hair and the body beforehand, removing calluses and dead skin; some trim their nails prior to immersion. Hair on the head and body is to be thoroughly combed, although exceptions are sometimes made for hair styled in dreadlocks. The mouth should be thoroughly cleaned, and removable dental appliances are usually taken out. The person should carefully check their body after preparation, and sometimes an attendant will also check to ensure these requirements are met. Showering or bathing and carefully checking the whole body is, therefore, part of the religious requirements before entering the water of a mikveh.
Although technically the requirements are the same for men and women, the common practice is that men do not go to great lengths to clean themselves before immersion, since for them the immersion (with rare exceptions) is not Halakhically obligatory. According to rabbinical tradition, the hair counts as part of the body, and therefore water is required to touch all parts of it, meaning that braids cannot be worn during immersion. This has resulted in debate among the various ethnic groups within Judaism about whether hair combing is necessary before immersion. The Ashkenazi community generally supports the view that hair must be combed straight so that there are no knots, but some take issue with this stance, particularly when it comes to dreadlocks.[citation needed] A number of rabbinical rulings argue in support of dreadlocks. Modern practice Orthodox Judaism generally adheres to the classical regulations and traditions, and consequently Orthodox Jewish women are obligated to immerse in a mikveh between niddah and sexual relations with their husbands. This includes brides before their marriage, and married women after their menstruation period or childbirth. In accordance with Orthodox rules concerning modesty, men and women immerse in separate mikveh facilities in different locations, or else use the mikveh at different designated times. In a series of responsa in 2006, the Committee on Jewish Law and Standards of Conservative Judaism reaffirmed a requirement that Conservative women use a mikveh monthly after the end of the niddah period that follows menstruation, while adopting certain leniencies, including reducing the length of the niddah period. The three responsa adopted permit a range of approaches, from an opinion reaffirming the traditional ritual to an opinion declaring that the concept of ritual purity does not apply outside the Temple in Jerusalem, proposing a new theological basis for the ritual, and adopting new terminology, including renaming the observances related to menstruation from taharat hamishpacha [family purity] to kedushat hamishpacha [family holiness] to reflect the view that the concept of ritual purity is no longer considered applicable. Isaac Klein's A Guide to Jewish Religious Practice, a comprehensive guide frequently used within Conservative Judaism, also addresses Conservative views on other uses of a mikveh, but because it predates the 2006 opinions, it describes an approach more closely resembling the Orthodox one, and does not address the leniencies and views those opinions reflected. Rabbi Miriam Berkowitz's book Taking the Plunge: A Practical and Spiritual Guide to the Mikveh (Jerusalem: Schechter Institute, 2007) offers a comprehensive discussion of contemporary issues and new mikveh uses, along with traditional reasons for observance, details of how to prepare and what to expect, and how the laws developed. Conservative Judaism encourages, but does not require, immersion before Jewish holidays (including Yom Kippur), as well as the immersion of utensils purchased from non-Jews. New uses are being developed throughout the liberal world for healing (after rape, incest, divorce, etc.) or celebration (milestone birthdays, anniversaries, ordination, or reading Torah for the first time). As in Orthodox Judaism, converts to Judaism through the Conservative movement are required to immerse themselves in a mikveh.
Two Jews must witness the event, at least one of whom must actually see the immersion. Immersion into a mikveh has been described as a very emotional, life-changing experience, similar to a graduation. Reform and Reconstructionist Judaism do not hold to the halachic requirements of the mikveh the way Conservative and Orthodox Judaism do, but some Reform and Reconstructionist rabbis recommend a mikveh ceremony, and there are growing trends toward using the mikveh for conversions, wedding preparation, and even before holidays. In the 21st century the mikveh is experiencing[according to whom?] a revival among progressive Jews, who view immersion as a way to mark transitions in their lives.[citation needed] By 2001, the Central Conference of American Rabbis had begun to recommend a mikveh ceremony for converts. "Open" mikvot welcome Jews to consider immersion for reasons not necessarily required by Jewish law; they might immerse following a divorce or medical treatment, to find closure after an abortion, or to celebrate a life transition, among other reasons. Progressive Jews may also use the mikveh for conversion. In more recent times, many transgender Jews have begun to use the practice of mikveh immersion to mark milestones in their gender transition. For example, Mayyim Hayyim, an organization in Newton, Massachusetts, has collaborated with Keshet to actively create a mikveh space accessible and inclusive to transgender Jews. The organization has a growing library of ceremonies that includes a ceremony for transition milestones, adapted from blessings written by the transgender Rabbi Elliot Rose Kukla. Interpretations Rabbi Aryeh Kaplan connects the laws of impurity to the narrative in the beginning of Genesis. According to Genesis, by eating of the fruit, Adam and Eve brought death into the world. Kaplan points out that most of the laws of impurity relate to some form of death (or, in the case of niddah, the loss of a potential life). One who comes into contact with one of the forms of death must then immerse in water, which is described in Genesis as flowing out of the Garden of Eden (the source of life), in order to cleanse oneself of this contact with death (and by extension of sin). According to Rabbi Abraham Isaac Kook, by immersing in the mikveh, "we are forced to recognize our existential estrangement from the physical universe. How long can we survive under water? The experience of submerging drives home the realization that our existence in this world is transient, and we should strive towards more lasting goals." A custom exists to read the seventh chapter of the Mikvaot tractate of the Mishnah following a funeral. This tractate covers the laws of the mikveh, and the seventh chapter starts with a discussion of substances which can be used as valid water sources for a mikveh: snow, hail, frost, ice, salt, and pourable mud. This alludes to the belief in resurrection, as "living water" in a lifeless frozen state (as ice) can still become living water again (after melting). The word mikveh makes use of the same root letters in Hebrew as the word for "hope", and this has served as the basis for homiletical comparison of the two concepts in both biblical and rabbinic literature. For instance, in the Book of Jeremiah, the word mikveh is used in the sense of "hope", but at the same time also associated with "living water": O Hashem, the Hope [mikveh] of Israel, all who forsake you will be ashamed...
because they have forsaken Hashem, the fountain of living water. Are there any of the worthless idols of the nations that can cause rain? Or can the heavens give showers? Is it not you, Hashem our God, and do we not hope [nekaveh] in you? For you have made all these things. In the Mishnah, following on from a discussion of Yom Kippur, Rabbi Akiva compares mikveh immersion to the relationship between God and Israel. Akiva refers to the description of God in the Book of Jeremiah as the "Mikveh of Israel", and suggests that "just as a mikveh purifies the contaminated, so does the Holy One, blessed is he, purify Israel". Controversies The Reform Movement's Israel Religious Action Center sued the state on behalf of the Reform and Conservative/Masorti movements to allow their members to use publicly funded mikvot. The case, which took ten years to resolve, resulted in the Israeli Supreme Court ruling that public ritual baths must accept all prospective converts to Judaism, including converts to Reform and Conservative Judaism. In his 2016 ruling, Supreme Court Justice Elyakim Rubinstein said that barring certain converts amounts to discrimination. Until this ruling, Orthodox officials had barred non-Orthodox converts from using any mikveh, as their traditions do not technically[clarification needed] conform to Jewish law, and the people they convert are therefore not technically Jews. Rubinstein noted: "Once it established public mikvahs, and put them at the service of the public—including for the process of conversion—the State cannot but be even-handed in allowing their use." He also said: "The State of Israel is free to supervise the use of its mikvahs, so long as it does so in an egalitarian manner." In 2013, the Israeli Center for Women's Justice and Kolech, an organization committed to Orthodox Jewish feminism, petitioned the Supreme Court to forbid attendants from asking intrusive questions of women at state-funded and -operated mikvot. In response, the Chief Rabbinate said it would forbid the questioning of women about their marital status before immersion. The complaint had charged that the practice represented unacceptable discrimination. In 2015, however, the ITIM Advocacy Center filed a complaint with the Israeli Supreme Court on behalf of 13 Orthodox women against the Chief Rabbinate and the Jerusalem Religious Council, insisting that women be allowed to use the mikvah "according to their personal customs and without supervision, or with their own attendant if they wish". The complaint charged that the Chief Rabbinate was ignoring directives passed in 2013 that allow women to use mikvah facilities without being asked intrusive questions by attendants. In June 2016, the Chief Rabbinate agreed to allow women to use a mikveh without an attendant. See also Notes References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nadir_of_American_race_relations] | [TOKENS: 4077]
Contents Nadir of American race relations The nadir of American race relations is a historical period defined by Rayford Logan as encompassing the worst time for race relations in the United States after the Civil War, which ended slavery. This period coincided with the Gilded Age, and includes the legal solidification of Jim Crow laws after the Reconstruction era, as well as the rise of lynchings and racial massacres. Its exact date range varies among historians. In his 1954 book The Negro in American Life and Thought: The Nadir, 1877–1901, Logan identified this as the period when "the Negro's status in American society" reached its lowest point. He argued for 1901 as its end, suggesting that race relations improved after that year; other historians, such as John Hope Franklin and Henry Arthur Callis, argued for dates as late as 1923. References to a nadir have continued to be used; most notably, James W. Loewen used the term in books as recently as 2006, and it also appears in books by other scholars. Loewen chooses later dates, arguing that the post-Reconstruction era was in fact one of widespread hope for racial equity due to idealistic Northern support for civil rights. In Loewen's view, the true nadir only began when Northern Republicans ceased supporting Southern Blacks' rights around 1890, and it lasted until the American entry into World War II in 1941. This period followed the financial Panic of 1873 and a continuing decline in cotton prices. It overlapped with both the Gilded Age and the Progressive Era, and was characterized by the nationwide sundown town phenomenon. Logan's focus was exclusively on African Americans in the Southern United States, but the time period which he covered also represents the worst period of anti-Chinese and wider anti-Asian discrimination, driven by fear of the so-called Yellow Peril, which included harassment and violence on the West Coast of the United States and the destruction of Chinatown, Denver. Background In the early part of the 20th century, some white historians put forth the claim that Reconstruction was a tragic period, when Republicans who were motivated by revenge and profit used troops to force Southerners to accept corrupt governments that were run by unscrupulous Northerners and unqualified Blacks. Such scholars generally dismissed the idea that Black people could ever be capable of governing societies. Notable proponents of this view were referred to as the Dunning School, named after William Archibald Dunning, an influential historian at Columbia University. Another Columbia professor, John Burgess, was notorious for writing that "black skin means membership in a race of men which has never of itself ... created any civilization of any kind." The Dunning School's view of Reconstruction held sway for years. It was represented in D. W. Griffith's popular movie The Birth of a Nation (1915) and, to some extent, in Margaret Mitchell's novel Gone with the Wind (1936). More recent historians of the period have rejected many of the Dunning School's conclusions and in their place offer a different assessment. Today's consensus regards Reconstruction as a time of idealism and hope, marked by some practical achievements. The Radical Republicans who passed the Fourteenth and Fifteenth Amendments were, for the most part, motivated by a desire to help freedmen. African American historian W. E. B.
Du Bois put this view forward in 1910, and later historians Kenneth Stampp and Eric Foner expanded it. The Republican Reconstruction governments had their share of corruption, but they benefited many whites, and were no more corrupt than Democratic governments or Northern Republican governments. Furthermore, the Reconstruction governments established public education and social welfare institutions for the first time, improving education for both Blacks and whites, and they also tried to improve social conditions for the many people who were left in poverty after the long war. No Reconstruction state government was dominated by Blacks; in fact, Blacks did not attain a level of representation equal to their share of the population in any state. Origins For several years after the Civil War, the federal government, pushed by Northern opinion, showed that it was willing to intervene to protect the rights of Black Americans. There were limits, however, to Republican efforts on behalf of Blacks: in Washington, a land-reform proposal made by the Freedmen's Bureau, which would have granted Blacks plots on the plantation land (forty acres and a mule) on which they worked, never came to pass. In the South, many former Confederates were stripped of the right to vote, but they resisted Reconstruction with violence and intimidation. James Loewen notes that, between 1865 and 1867, when white Democrats controlled the government, whites murdered an average of one Black person every day in Hinds County, Mississippi. Black schools were especially targeted: school buildings frequently were burned, and teachers were flogged and occasionally murdered. The postwar terrorist group Ku Klux Klan (KKK) acted with significant local support, attacking freedmen and their white allies; the group was largely suppressed by federal efforts under the Enforcement Acts of 1870–1871, but it did not disappear, and it had a resurgence in the early 20th century. Despite these failures, Blacks continued to vote and attend school. Literacy soared, and many African Americans were elected to local and statewide offices, with several serving in Congress. Because of the Black community's commitment to education, the majority of Blacks were literate by 1900. Continued violence in the South, which grew especially heated around electoral campaigns, sapped Northern resolve. More significantly, after the long years and losses of the Civil War, Northerners had lost heart for the massive commitment of money and arms that would have been required to stifle the white insurgency. The financial panic of 1873 disrupted the economy nationwide, causing more difficulties. The white insurgency took on new life ten years after the war. Conservative white Democrats waged an increasingly violent campaign, with the Colfax and Coushatta massacres in Louisiana in 1873 as early signs. The next year saw the formation of paramilitary groups, such as the White League in Louisiana (1874) and the Red Shirts in Mississippi and the Carolinas, that worked openly to turn Republicans out of office, disrupt Black organizing, and intimidate and suppress Black voting. They invited press coverage. One historian described them as "the military arm of the Democratic Party." In 1874, in a continuation of the disputed gubernatorial election of 1872, thousands of White League militiamen fought against New Orleans police and Louisiana state militia and won. They turned out the Republican governor and installed the Democrat Samuel D.
McEnery, took over the capitol, state house and armory for a few days, and then retreated in the face of Federal troops. This was known as the "Battle of Liberty Place". Northerners waffled and finally capitulated to the South, giving up on being able to control election violence. Abolitionist leaders like Horace Greeley began to ally themselves with Democrats in attacking Reconstruction governments. By 1875, there was a Democratic majority in the House of Representatives. President Ulysses S. Grant, who as a general had led the Union to victory in the Civil War, initially refused to send troops to Mississippi in 1875 when the governor of the state asked him to. Violence surrounded the presidential election of 1876 in many areas, beginning a trend. After Grant, it would be many years before any President would do anything to extend the protection of the law to Black people. Jim Crow laws and terrorism "Believing that the Constitution of the United States contemplated a government to be carried on by an enlightened people; believing that its framers did not anticipate the enfranchisement of an ignorant population of African origin, and believing that those men of the State of North Carolina, who joined in forming the Union, did not contemplate for their descendants a subjection to an inferior race, "We, the undersigned citizens of the city of Wilmington and county of New Hanover, do hereby declare that we will no longer be ruled, and will never again be ruled, by men of African origin ...." As noted above, white paramilitary forces contributed to whites' taking over power in the late 1870s. A brief coalition of populists took over in some states, but Democrats had returned to power after the 1880s. From 1890 to 1908, they proceeded to pass legislation and constitutional amendments to disenfranchise most Blacks and many poor whites, with Mississippi and South Carolina creating new state constitutions in 1890 and 1895 respectively, to disenfranchise African Americans. Democrats used a combination of restrictions on voter registration and voting methods, such as poll taxes, literacy and residency requirements, and ballot box changes. The main push came from elite Democrats in the Solid South, where Blacks were a majority of voters. The elite Democrats also acted to disenfranchise poor whites. African Americans were an absolute majority of the population in Louisiana, Mississippi and South Carolina, and represented more than 40% of the population in four other former Confederate states. Accordingly, many whites perceived African Americans as a major political threat, because in free and fair elections, they would hold the balance of power in a majority of the South. South Carolina US Senator Ben Tillman proudly proclaimed in 1900, "We have done our level best [to prevent blacks from voting] ... we have scratched our heads to find out how we could eliminate the last one of them. We stuffed ballot boxes. We shot them. We are not ashamed of it." Conservative white Democratic governments passed Jim Crow legislation, creating a system of legal racial segregation in public and private facilities. Blacks were separated in schools and the few hospitals, were restricted in seating on trains, and had to use separate sections in some restaurants and public transportation systems. They were often barred from some stores, or forbidden to use lunchrooms, restrooms and fitting rooms. Because they could not vote, they could not serve on juries, which meant they had little if any legal recourse in the system. 
Between 1889 and 1922, as political disenfranchisement and segregation were being established, the National Association for the Advancement of Colored People (NAACP) calculates that lynchings reached their worst level in history. Almost 3,500 people fell victim to lynching, almost all of them Black men. Historian James Loewen notes that lynching emphasized the powerlessness of Blacks: "the defining characteristic of a lynching is that the murder takes place in public, so everyone knows who did it, yet the crime goes unpunished." African American civil rights activist Ida Bell Wells-Barnett conducted one of the first systematic studies of the subject. She documented that the most prevalent accusation against lynching victims was murder or attempted murder. She found Blacks were "lynched for anything or nothing" – for wife-beating, stealing hogs, being "saucy to white people", sleeping with a consenting white woman, or simply for being in the wrong place at the wrong time. Blacks who were economically successful faced reprisals or sanctions. When Richard Wright tried to train to become an optometrist and lens-grinder, the other men in the shop threatened him until he was forced to leave. In 1911, Blacks were barred from participating in the Kentucky Derby because African Americans had won more than half of the first twenty-eight races. Through violence and legal restrictions, whites often prevented Blacks from working as common laborers, much less as skilled artisans or in the professions. Under such conditions, even the most ambitious and talented Black person found it extremely difficult to advance. This situation called into question the views of Booker T. Washington, the most prominent Black leader during the early part of the nadir. He had argued that Black people could better themselves by doing hard work and being thrifty. He believed that they had to master basic work before they went on to pursue college careers and professional aspirations. Washington believed that his programs trained Blacks for the lives which they were likely to lead as well as for the jobs which they could get in the South. Washington's position was advanced in his influential Atlanta Exposition Speech in 1895, which was the genesis of the Atlanta Compromise. W. E. B. Du Bois advocated a more uncompromising position than Washington, stating: "He is striving nobly to make Negro artisans business men and property-owners; but it is utterly impossible, under modern competitive methods, for workingmen and property-owners to defend their rights and exist without the right of suffrage." Washington had always (though often clandestinely) supported the right of Black suffrage, and had fought against disfranchisement laws in Georgia, Louisiana, and other Southern states. This included secretive funding of litigation resulting in Giles v. Harris, 189 U.S. 475 (1903), which was lost due to Supreme Court reluctance to interfere with states' rights. Great Migration and national hostility Many Blacks left the South in an attempt to find better living and working conditions. Logan notes that in the spring of 1879, "some 40,000 Negroes virtually stampeded from Mississippi, Louisiana, Alabama, and Georgia for the Midwest. The largest number fled to Kansas". More significantly, beginning in about 1915, many Blacks moved to Northern cities in what became known as the Great Migration. 
Through the 1930s, more than 1.5 million Blacks would leave the South for better lives in the North, seeking work and the chance to escape from lynchings and legal segregation. While they faced difficulties, overall, they had better chances in the North. They had to make great cultural changes, as most went from rural areas to major industrial cities, and they also had to adjust from being rural workers to being urban workers. As an example, in its years of expansion, the Pennsylvania Railroad recruited tens of thousands of workers from the South. In the South, alarmed whites, worried that their labor force was leaving, often tried to prevent Black migration.[citation needed] Black Americans who fled racial oppression either returned to retrieve the rest of their family or sent train tickets back home. In response, as white southerners observed train platforms packed with African Americans, several cities passed ordinances that made it illegal for trains to accept pre-paid tickets. Ordinances were also put in place to prevent the group travel of Black families or clusters of African Americans who tried to purchase group rates. During the nadir, Northern areas struggled with upheaval and hostility. In the Midwest and West, many towns posted "sundown" warnings, threatening to kill African Americans who remained overnight. These "sundown" towns also expelled African Americans who had settled in those towns both before and during Reconstruction. Monuments to Confederate war dead were erected across the nation – as far away as Montana, for example. Black housing was often segregated in the North. There was competition for jobs and housing as Blacks entered cities which were also the destination of millions of immigrants from eastern and southern Europe. As more Blacks moved north, they encountered racism and had to battle over territory, often against Irish American communities defending local political power bases. In some regions, Blacks could not serve on juries. Blackface shows, in which whites dressed as Blacks and portrayed African Americans as ignorant clowns, were popular in both the North and the South. The Supreme Court reflected conservative tendencies and did not overrule Southern constitutional changes resulting in disfranchisement. In 1896, the Court ruled in Plessy v. Ferguson that "separate but equal" facilities for Blacks were constitutional; the Court was made up almost entirely of Northerners. However, equal facilities were rarely provided, as there was no state or federal legislation requiring them. It would not be until 58 years later, with Brown v. Board of Education (1954), that the Court overruled the decision. While there were critics in the scientific community such as Franz Boas, eugenics and scientific racism were promoted in academia by many scientists, like Lothrop Stoddard and Madison Grant, who claimed "scientific evidence" for the racial superiority of whites and thereby worked to justify racial segregation and second-class citizenship for Blacks. Numerous Black people had voted for Democrat Woodrow Wilson in the 1912 election, based on his promise to work for them. Instead, he segregated government workplaces and employment in some agencies. The film The Birth of a Nation (1915), which celebrated the original Ku Klux Klan, was shown at the White House to President Wilson and his cabinet members. 
Writing in 1921 to Joseph Tumulty, Wilson said of the film, "I have always felt that this was a very unfortunate production and I wish most sincerely that its production might be avoided, particularly in communities where there are so many colored people."[page needed] The Birth of a Nation resulted in the rebirth of the Klan, which in the 1920s had more power and influence than the original Klan ever did. In 1924, the Klan had four million members. It also controlled the governorship and a majority of the state legislature in Indiana, and exerted a powerful political influence in Arkansas, Oklahoma, California, Georgia, Oregon, and Texas. In the years during and after World War I there were great social tensions in the nation. In addition to the Great Migration and immigration from Europe, African American Army veterans, newly demobilized, sought jobs, and as trained soldiers, were less likely to acquiesce to discrimination. Massacres and attacks on Blacks that developed out of strikes and economic competition occurred in Houston, Philadelphia, and East St. Louis in 1917. In 1919, there were so many violent attacks in several major cities that the summer of that year became known as Red Summer. The Chicago race riot of 1919 erupted into mob violence for several days. It left 15 whites and 23 Blacks dead, over 500 injured and more than 1,000 homeless. An investigation found that ethnic Irish, who had established their own power base earlier on the South Side, were heavily implicated in the riots. The 1921 Tulsa race massacre in Tulsa, Oklahoma, was even more deadly; white mobs invaded and burned the Greenwood district of Tulsa; 1,256 homes were destroyed and 39 people (26 Black, 13 white) were confirmed killed, although recent investigations suggest that the number of Black deaths could be considerably higher. Legacy Black literacy levels, which rose during Reconstruction, continued to increase through this period. The NAACP was established in 1909, and by 1920 the group had won a few important anti-discrimination lawsuits. African Americans, such as Du Bois and Wells-Barnett, continued the tradition of advocacy, organizing, and journalism which had helped spur abolitionism, and also developed new tactics that helped to spur the civil rights movement of the 1950s and 1960s. The Harlem Renaissance and the popularity of jazz music during the early part of the 20th century made many Americans more aware of Black culture and more accepting of Black celebrities. Overall, however, the nadir was a disaster, certainly for Black people. Foner points out that "by the early twentieth century [racism] had become more deeply embedded in the nation's culture and politics than at any time since the beginning of the antislavery crusade and perhaps in our nation's entire history." Similarly, Loewen argues that the family instability and crime which many sociologists have found in Black communities can be traced, not to slavery, but to the nadir and its aftermath. Foner noted that "none of Reconstruction's black officials created a family political dynasty" and concluded that the nadir "aborted the development of the South's black political leadership."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Epidemiology] | [TOKENS: 6522]
Contents Epidemiology Epidemiology is the study and analysis of the distribution (who, when, and where), patterns and determinants of health and disease conditions in a defined population, and the application of this knowledge to prevent diseases. It is a cornerstone of public health, and shapes policy decisions and evidence-based practice by identifying risk factors for disease and targets for preventive healthcare. Epidemiologists help with study design, collection and statistical analysis of data, and interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop methodology used in clinical research, public health studies, and, to a lesser extent, basic research in the biological sciences. Major areas of epidemiological study include disease causation, transmission, outbreak investigation, disease surveillance, environmental epidemiology, forensic epidemiology, occupational epidemiology, screening, biomonitoring, and comparisons of treatment effects such as in clinical trials. Epidemiologists rely on other scientific disciplines like biology to better understand disease processes, statistics to make efficient use of the data and draw appropriate conclusions, social sciences to better understand proximate and distal causes, and engineering for exposure assessment. Epidemiology, literally meaning "the study of what is upon the people", is derived from Greek epi 'upon, among', demos 'people, district' and logos 'study, word, discourse', suggesting that it applies only to human populations. However, the term is widely used in studies of zoological populations (veterinary epidemiology), although the term "epizoology" is available, and it has also been applied to studies of plant populations (botanical or plant disease epidemiology). The distinction between "epidemic" and "endemic" was first drawn by Hippocrates. The term "epidemiology" appears to have first been used to describe the study of epidemics in 1802 by the Spanish physician Joaquín de Villalba [es] in Epidemiología Española. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. The term epidemiology is now widely applied to cover the description and causation of not only epidemic, infectious disease, but of disease in general, including related conditions and, especially since the 20th century, chronic diseases such as diabetes, cardiovascular disease, and cancer. Some examples of topics examined through epidemiology include high blood pressure, mental illness and obesity. Epidemiology in this broader sense is concerned with how patterns of disease alter the functioning of human populations. History The Greek physician Hippocrates, taught by Democritus and known as the father of medicine, sought a logic to sickness; he is the first person known to have examined the relationships between the occurrence of disease and environmental influences. Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors (black bile, yellow bile, blood, and phlegm). The cure to the sickness was to remove or add the humor in question to balance the body. This belief led to the application of bloodletting and dieting in medicine. He coined the terms endemic (for diseases usually found in some places but not in others) and epidemic (for diseases that are seen at some times but not others). 
In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose a theory that the very small, unseeable particles that cause disease were alive. They were considered to be able to spread by air, multiply by themselves and to be destroyable by fire. In this way he refuted Galen's miasma theory (poison gas in sick people). In 1543 he wrote a book De contagione et contagiosis morbis, in which he was the first to promote personal and environmental hygiene to prevent disease. The development of a sufficiently powerful microscope by Antonie van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease. During the Ming dynasty, Wu Youke (1582–1652) developed the idea that some diseases were caused by transmissible agents, which he called Li Qi (戾气 or pestilential factors) when he observed various epidemics rage around him between 1641 and 1644. His book Wen Yi Lun (瘟疫论, Treatise on Pestilence/Treatise of Epidemic Diseases) can be regarded as the main etiological work that brought forward the concept. His concepts were still being considered in the analysis of the SARS outbreak by the WHO in 2004 in the context of traditional Chinese medicine. Another pioneer, Thomas Sydenham (1624–1689), was the first to distinguish the fevers of Londoners in the later 1600s. His theories on the cure of fevers met with much resistance from traditional physicians at the time. He was not able to find the initial cause of the smallpox fever he researched and treated. John Graunt, a haberdasher and amateur statistician, published Natural and Political Observations ... upon the Bills of Mortality in 1662. In it, he analysed the mortality rolls in London before the Great Plague, presented one of the first life tables, and reported time trends for many diseases, new and old. He provided statistical evidence for many theories on disease, and also refuted some widespread ideas on them. John Snow is famous for his investigations into the causes of the 19th-century cholera epidemics, and is also known as the father of (modern) epidemiology. He began by noticing the significantly higher death rates in two areas supplied by the Southwark Company. His identification of the Broad Street pump as the cause of the Soho epidemic is considered the classic example of epidemiology. Snow used chlorine in an attempt to clean the water and had the pump handle removed; this ended the outbreak. This has been perceived as a major event in the history of public health and regarded as the founding event of the science of epidemiology, having helped shape public health policies around the world. However, Snow's research and the preventive measures he took to avoid further outbreaks were not fully accepted or put into practice until after his death, due to the prevailing miasma theory of the time, a model of disease in which poor air quality was blamed for illness. This was used to rationalize high rates of infection in impoverished areas instead of addressing the underlying issues of poor nutrition and sanitation, and was proven false by his work. Other pioneers include Danish physician Peter Anton Schleisner, who in 1849 related his work on the prevention of the epidemic of neonatal tetanus on the Vestmanna Islands in Iceland. Another important pioneer was Hungarian physician Ignaz Semmelweis, who in 1847 brought down infant mortality at a Vienna hospital by instituting a disinfection procedure. 
His findings were published in 1850, but his work was ill-received by his colleagues, who discontinued the procedure. Disinfection did not become widely practiced until British surgeon Joseph Lister, aided by his colleague, chemist Thomas Anderson, was able to "discover" antiseptics in 1865 based on the earlier work of Louis Pasteur. In the early 20th century, mathematical methods were introduced into epidemiology by Ronald Ross, Janet Lane-Claypon, Anderson Gray McKendrick, and others. In a parallel development during the 1920s, German-Swiss pathologist Max Askanazy and others founded the International Society for Geographical Pathology to systematically investigate the geographical pathology of cancer and other non-infectious diseases across populations in different regions. After World War II, Richard Doll and other non-pathologists joined the field and advanced methods to study cancer, a disease with patterns and modes of occurrence that could not be suitably studied with the methods developed for epidemics of infectious diseases. Geographical pathology eventually combined with infectious disease epidemiology to make the field that is epidemiology today. Another breakthrough was the 1954 publication of the results of the British Doctors Study, led by Richard Doll and Austin Bradford Hill, which lent very strong statistical support to the link between tobacco smoking and lung cancer.[citation needed] In the late 20th century, with the advancement of biomedical sciences, a number of molecular markers in blood, other biospecimens and the environment were identified as predictors of development or risk of a certain disease. Epidemiology research to examine the relationship between these biomarkers analyzed at the molecular level and disease was broadly named "molecular epidemiology". Specifically, "genetic epidemiology" has been used for epidemiology of germline genetic variation and disease. Genetic variation is typically determined using DNA from peripheral blood leukocytes.[citation needed] Since the 2000s, genome-wide association studies (GWAS) have been commonly performed to identify genetic risk factors for many diseases and health conditions. While most molecular epidemiology studies are still using conventional disease diagnosis and classification systems, it is increasingly recognized that disease progression represents inherently heterogeneous processes differing from person to person. Conceptually, each individual has a unique disease process different from any other individual ("the unique disease principle"), considering the uniqueness of the exposome (a totality of endogenous and exogenous / environmental exposures) and its unique influence on the molecular pathologic process in each individual. Studies to examine the relationship between an exposure and the molecular pathologic signature of disease (particularly cancer) became increasingly common throughout the 2000s. However, the use of molecular pathology in epidemiology posed unique challenges, including lack of research guidelines and standardized statistical methodologies, and paucity of interdisciplinary experts and training programs. Furthermore, the concept of disease heterogeneity appears to conflict with the long-standing premise in epidemiology that individuals with the same disease name have similar etiologies and disease processes. 
To resolve these issues and advance population health science in the era of molecular precision medicine, "molecular pathology" and "epidemiology" were integrated to create a new interdisciplinary field of "molecular pathological epidemiology" (MPE), defined as "epidemiology of molecular pathology and heterogeneity of disease". In MPE, investigators analyze the relationships between (A) environmental, dietary, lifestyle and genetic factors; (B) alterations in cellular or extracellular molecules; and (C) evolution and progression of disease. A better understanding of the heterogeneity of disease pathogenesis will further contribute to elucidating the etiologies of disease. The MPE approach can be applied not only to neoplastic diseases but also to non-neoplastic diseases. The concept and paradigm of MPE have become widespread in the 2010s.[excessive citations] By 2012, it was recognized that many pathogens' evolution is rapid enough to be highly relevant to epidemiology, and that therefore much could be gained from an interdisciplinary approach to infectious disease integrating epidemiology and molecular evolution to "inform control strategies, or even patient treatment." Modern epidemiological studies can use advanced statistics and machine learning to create predictive models as well as to define treatment effects. There is increasing recognition that a wide range of modern data sources, many not originating from healthcare or epidemiology, can be used for epidemiological study. Such digital epidemiology can include data from internet searching, mobile phone records and retail sales of drugs.[citation needed] Types of studies Epidemiologists employ a range of study designs, from the observational to the experimental, generally categorized as descriptive (involving the assessment of data covering time, place, and person), analytic (aiming to further examine known associations or hypothesized relationships), and experimental (a term often equated with clinical or community trials of treatments and other interventions). In observational studies, nature is allowed to "take its course", as epidemiologists observe from the sidelines. Conversely, in experimental studies, the epidemiologist is the one in control of all of the factors entering a certain case study. Epidemiological studies are aimed, where possible, at revealing unbiased relationships between exposures such as alcohol or smoking, biological agents, stress, or chemicals to mortality or morbidity. The identification of causal relationships between these exposures and outcomes is an important aspect of epidemiology. Modern epidemiologists use informatics and infodemiology as tools.[citation needed] Observational studies have two components, descriptive and analytical. Descriptive observations pertain to the "who, what, where and when of health-related state occurrence". However, analytical observations deal more with the 'how' of a health-related event. Experimental epidemiology contains three case types: randomized controlled trials (often used for a new medicine or drug testing), field trials (conducted on those at a high risk of contracting a disease), and community trials (research on diseases of social origin). The term 'epidemiologic triad' is used to describe the intersection of Host, Agent, and Environment in analyzing an outbreak. Case series may refer to the qualitative study of the experience of a single patient, or a small group of patients with a similar diagnosis, or to a statistical technique comparing periods during which patients are exposed to some factor with periods when they are unexposed. The former type of study is purely descriptive and cannot be used to make inferences about the general population of patients with that disease. 
These types of studies, in which an astute clinician identifies an unusual feature of a disease or a patient's history, may lead to the formulation of a new hypothesis. Using the data from the series, analytic studies could be done to investigate possible causal factors. These can include case-control studies or prospective studies. A case-control study would involve matching comparable controls without the disease to the cases in the series. A prospective study would involve following the case series over time to evaluate the disease's natural history. The latter type, more formally described as self-controlled case-series studies, divides individual patient follow-up time into exposed and unexposed periods and uses fixed-effects Poisson regression processes to compare the incidence rate of a given outcome between exposed and unexposed periods. This technique has been extensively used in the study of adverse reactions to vaccination and has been shown in some circumstances to provide statistical power comparable to that available in cohort studies.[citation needed] Case-control studies select subjects based on their disease status; the design is retrospective. A group of individuals that are disease positive (the "case" group) is compared with a group of disease negative individuals (the "control" group). The control group should ideally come from the same population that gave rise to the cases. The case-control study looks back through time at potential exposures that both groups (cases and controls) may have encountered. A 2×2 table is constructed, displaying exposed cases (A), exposed controls (B), unexposed cases (C) and unexposed controls (D). The statistic generated to measure association is the odds ratio (OR), which is the ratio of the odds of exposure in the cases (A/C) to the odds of exposure in the controls (B/D), i.e. OR = (AD/BC).[citation needed] If the OR is significantly greater than 1, then the conclusion is "those with the disease are more likely to have been exposed", whereas if it is close to 1 then the exposure and disease are not likely associated. If the OR is far less than one, then this suggests that the exposure is a protective factor in the causation of the disease. Case-control studies are usually faster and more cost-effective than cohort studies but are sensitive to bias (such as recall bias and selection bias). The main challenge is to identify the appropriate control group; the distribution of exposure among the control group should be representative of the distribution in the population that gave rise to the cases. This can be achieved by drawing a random sample from the original population at risk. This has as a consequence that the control group can contain people with the disease under study when the disease has a high attack rate in a population.[citation needed] A major drawback of case-control studies is that, in order to be considered statistically significant, the minimum number of cases required at the 95% confidence level is related to the odds ratio by the equation: total cases = A + C = 1.96² × (1 + N) × (1/ln(OR))² × ((OR + 2√OR + 1)/√OR) ≈ 15.5 × (1 + N) × (1/ln(OR))², where N is the ratio of cases to controls. As the odds ratio approaches 1, the number of cases required for statistical significance grows towards infinity, rendering case-control studies all but useless for low odds ratios. For instance, with equal numbers of cases and controls, detecting an odds ratio of 1.1 requires many times more cases than detecting an odds ratio of 1.5 (see the sketch below).
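As a concrete illustration, the following is a minimal Python sketch of the two calculations just described. The 2×2 counts are hypothetical, and min_cases simply transcribes the sample-size approximation given above, so treat it as an illustration of the arithmetic rather than a substitute for a proper power calculation.

    import math

    def odds_ratio(a, b, c, d):
        # 2x2 table: exposed cases (A), exposed controls (B),
        # unexposed cases (C), unexposed controls (D); OR = (A/C)/(B/D) = AD/BC
        return (a * d) / (b * c)

    def min_cases(odds, n=1.0):
        # Minimum total cases (A + C) for significance at the 95% level,
        # per the approximation quoted above; n is the ratio of cases to controls
        return (1.96 ** 2) * (1 + n) * (1 / math.log(odds)) ** 2 * \
               ((odds + 2 * math.sqrt(odds) + 1) / math.sqrt(odds))

    # Hypothetical study: 40 exposed cases, 60 exposed controls,
    # 20 unexposed cases, 80 unexposed controls.
    print(round(odds_ratio(40, 60, 20, 80), 2))  # 2.67: cases were more likely exposed
    print(round(min_cases(1.5)))                 # ~189 cases needed when cases = controls
    print(round(min_cases(1.1)))                 # ~3385 cases: small ORs demand huge studies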
Cohort studies select subjects based on their exposure status. The study subjects should be at risk of the outcome under investigation at the beginning of the cohort study; this usually means that they should be disease free when the cohort study starts. The cohort is followed through time to assess their later outcome status. An example of a cohort study would be the investigation of a cohort of smokers and non-smokers over time to estimate the incidence of lung cancer. The same 2×2 table is constructed as with the case-control study. However, the point estimate generated is the relative risk (RR), which is the probability of disease for a person in the exposed group, Pe = A / (A + B), over the probability of disease for a person in the unexposed group, Pu = C / (C + D), i.e. RR = Pe / Pu. As with the OR, a RR greater than 1 shows association, where the conclusion can be read "those with the exposure were more likely to develop the disease." Prospective studies have many benefits over case-control studies. The RR is a more powerful effect measure than the OR, as the OR is just an estimation of the RR, since true incidence cannot be calculated in a case-control study where subjects are selected based on disease status. Temporality can be established in a prospective study, and confounders are more easily controlled for. However, they are more costly, and there is a greater chance of losing subjects to follow-up based on the long time period over which the cohort is followed. Cohort studies also are limited by the same equation for number of cases as case-control studies, but, if the base incidence rate in the study population is very low, the number of cases required is reduced by 1⁄2.
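A matching sketch for the cohort measure, under the same caveats: the cohort counts are invented, and the confidence-interval formula (a Wald-type interval on the log scale) is a standard textbook method rather than one given in this article; it anticipates the discussion of precision and confidence intervals below.

    import math

    def relative_risk(a, b, c, d):
        # Same 2x2 layout: Pe = A/(A+B) in the exposed,
        # Pu = C/(C+D) in the unexposed; RR = Pe / Pu
        return (a / (a + b)) / (c / (c + d))

    def rr_confint(a, b, c, d, z=1.96):
        # Wald 95% confidence interval for the RR, computed on the log scale
        rr = relative_risk(a, b, c, d)
        se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
        return rr * math.exp(-z * se), rr * math.exp(z * se)

    # Hypothetical cohort: 30 of 1,000 smokers and 10 of 1,000 non-smokers
    # develop the disease during follow-up.
    rr = relative_risk(30, 970, 10, 990)
    lo, hi = rr_confint(30, 970, 10, 990)
    print(round(rr, 1), round(lo, 2), round(hi, 2))  # 3.0 1.47 6.1: interval excludes 1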
Causal inference Although epidemiology is sometimes viewed as a collection of statistical tools used to elucidate the associations of exposures to health outcomes, a deeper understanding of this science is that of discovering causal relationships. "Correlation does not imply causation" is a common theme for much of the epidemiological literature. For epidemiologists, the key is in the term inference. Correlation, or at least association between two variables, is a necessary but not sufficient criterion for the inference that one variable causes the other. Epidemiologists use gathered data and a broad range of biomedical and psychosocial theories in an iterative way to generate or expand theory, to test hypotheses, and to make educated, informed assertions about which relationships are causal, and about exactly how they are causal. Epidemiologists emphasize that the "one cause – one effect" understanding is a simplistic mis-belief. Most outcomes, whether disease or death, are caused by a chain or web consisting of many component causes. Causes can be distinguished as necessary, sufficient or probabilistic conditions. If a necessary condition can be identified and controlled (e.g., antibodies to a disease agent, energy in an injury), the harmful outcome can be avoided (Robertson, 2015). One tool regularly used to conceptualize the multicausality associated with disease is the causal pie model. In 1965, Austin Bradford Hill proposed a series of considerations to help assess evidence of causation, which have come to be commonly known as the "Bradford Hill criteria". In contrast to the explicit intentions of their author, Hill's considerations are now sometimes taught as a checklist to be implemented for assessing causality. Hill himself said "None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non." Epidemiological studies can only go to prove that an agent could have caused, but not that it did cause, an effect in any particular case: Epidemiology is concerned with the incidence of disease in populations and does not address the question of the cause of an individual's disease. This question, sometimes referred to as specific causation, is beyond the domain of the science of epidemiology. Epidemiology has its limits at the point where an inference is made that the relationship between an agent and a disease is causal (general causation) and where the magnitude of excess risk attributed to the agent has been determined; that is, epidemiology addresses whether an agent can cause disease, not whether an agent did cause a specific plaintiff's disease. In United States law, epidemiology alone cannot prove that a causal association does not exist in general. Conversely, it can be (and is in some circumstances) taken by US courts, in an individual case, to justify an inference that a causal association does exist, based upon a balance of probability. The subdiscipline of forensic epidemiology is directed at the investigation of specific causation of disease or injury in individuals or groups of individuals in instances in which causation is disputed or is unclear, for presentation in legal settings. Population-based health management Epidemiological practice and the results of epidemiological analysis make a significant contribution to emerging population-based health management frameworks. Modern population-based health management is complex, requiring a broad set of skills (medical, political, technological, mathematical, etc.) of which epidemiological practice and analysis is a core component, unified with management science to provide efficient and effective health care and health guidance to a population. This task requires the forward-looking ability of modern risk management approaches that transform health risk factors, incidence, prevalence and mortality statistics (derived from epidemiological analysis) into management metrics that not only guide how a health system responds to current population health issues but also how a health system can be managed to better respond to future potential population health issues. Examples of organizations that use population-based health management and leverage the work and results of epidemiological practice include the Canadian Strategy for Cancer Control, Health Canada Tobacco Control Programs, the Rick Hansen Foundation, and the Canadian Tobacco Control Research Initiative. Each of these organizations uses a population-based health management framework called Life at Risk that combines epidemiological quantitative analysis with demographics, health agency operational research and economics. Applied field epidemiology Applied epidemiology is the practice of using epidemiological methods to protect or improve the health of a population. Applied field epidemiology can include investigating communicable and non-communicable disease outbreaks, mortality and morbidity rates, and nutritional status, among other indicators of health, with the purpose of communicating the results to those who can implement appropriate policies or disease control measures. 
As the surveillance and reporting of diseases and other health factors become increasingly difficult in humanitarian crisis situations, the methodologies used to report the data are compromised. One study found that less than half (42.4%) of nutrition surveys sampled from humanitarian contexts correctly calculated the prevalence of malnutrition and only one-third (35.3%) of the surveys met the criteria for quality. Among the mortality surveys, only 3.2% met the criteria for quality. As nutritional status and mortality rates help indicate the severity of a crisis, the tracking and reporting of these health factors is crucial. Vital registries are usually the most effective ways to collect data, but in humanitarian contexts these registries can be non-existent, unreliable, or inaccessible. As such, mortality is often inaccurately measured using either prospective demographic surveillance or retrospective mortality surveys. Prospective demographic surveillance requires much manpower and is difficult to implement in a spread-out population. Retrospective mortality surveys are prone to selection and reporting biases. Other methods are being developed, but are not common practice yet. Characterization, validity, and bias The concept of waves in epidemics has implications especially for communicable diseases. A working definition for the term "epidemic wave" is based on two key features: 1) it comprises periods of upward or downward trends, and 2) these increases or decreases must be substantial and sustained over a period of time, in order to distinguish them from minor fluctuations or reporting errors. A consistent scientific definition provides a common language with which to communicate about and understand the progression of an epidemic such as the COVID-19 pandemic, aiding healthcare organizations and policymakers in resource planning and allocation. Different fields in epidemiology have different levels of validity. One way to assess the validity of findings is the ratio of false-positives (claimed effects that are not correct) to false-negatives (studies which fail to support a true effect). In genetic epidemiology, candidate-gene studies may produce over 100 false-positive findings for each false-negative. By contrast, genome-wide association studies appear close to the reverse, with only one false positive for every 100 or more false-negatives. This ratio has improved over time in genetic epidemiology, as the field has adopted stringent criteria. By contrast, other epidemiological fields have not required such rigorous reporting and are much less reliable as a result. Random error is the result of fluctuations around a true value because of sampling variability. Random error is just that: random. It can occur during data collection, coding, transfer, or analysis. Examples of random errors include poorly worded questions, a misunderstanding in interpreting an individual answer from a particular respondent, or a typographical error during coding. Random error affects measurement in a transient, inconsistent manner and it is impossible to correct for random error. There is random error in all sampling procedures – sampling error.[citation needed] Precision in epidemiological variables is a measure of random error. Precision is also inversely related to random error, so that to reduce random error is to increase precision. Confidence intervals are computed to demonstrate the precision of relative risk estimates. 
The narrower the confidence interval, the more precise the relative risk estimate. There are two basic ways to reduce random error in an epidemiological study. The first is to increase the sample size of the study. In other words, add more subjects to your study. The second is to reduce the variability in measurement in the study. This might be accomplished by using a more precise measuring device or by increasing the number of measurements. Note that if the sample size or the number of measurements is increased, or a more precise measuring tool is purchased, the costs of the study are usually increased. There is usually an uneasy balance between the need for adequate precision and the practical issue of study cost. A systematic error or bias occurs when there is a difference between the true value (in the population) and the observed value (in the study) from any cause other than sampling variability. An example of systematic error is if, unknown to you, the pulse oximeter you are using is set incorrectly and adds two points to the true value each time a measurement is taken. The measuring device could be precise but not accurate. Because the error happens in every instance, it is systematic. Conclusions you draw based on that data will still be incorrect. But the error can be reproduced in the future (e.g., by using the same mis-set instrument). A mistake in coding that affects all responses for that particular question is another example of a systematic error. The validity of a study is dependent on the degree of systematic error. Validity is usually separated into two components: internal validity and external validity. Selection bias occurs when study subjects are selected or become part of the study as a result of a third, unmeasured variable which is associated with both the exposure and outcome of interest. For instance, it has repeatedly been noted that cigarette smokers and non-smokers tend to differ in their study participation rates. (David Sackett cites the example of Seltzer et al., in which 85% of non-smokers and 67% of smokers returned mailed questionnaires.) Such a difference in response will not lead to bias if it is not also associated with a systematic difference in outcome between the two response groups. Information bias is bias arising from systematic error in the assessment of a variable. An example of this is recall bias. A typical example is again provided by Sackett in his discussion of a study examining the effect of specific exposures on fetal health: "in questioning mothers whose recent pregnancies had ended in fetal death or malformation (cases) and a matched group of mothers whose pregnancies ended normally (controls) it was found that 28% of the former, but only 20% of the latter, reported exposure to drugs which could not be substantiated either in earlier prospective interviews or in other health records". In this example, recall bias probably occurred as a result of women who had had miscarriages having an apparent tendency to better recall and therefore report previous exposures. Next to sample- and variable-related bias, bias can also arise from an imperfect study design. One example is immortal time bias, which occurs when, during the study period, there is some interval during which the outcome event cannot occur (making these individuals "immortal").
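The contrast between random and systematic error can be made concrete with a small simulation that continues the mis-set pulse oximeter example; all values here are invented for illustration.

    import random

    random.seed(0)
    TRUE_VALUE = 97.0   # hypothetical true oxygen saturation
    BIAS = 2.0          # the mis-set oximeter adds two points to every reading

    def measure():
        # random error: a transient fluctuation, different on every reading;
        # systematic error: the same +2 offset every single time
        return TRUE_VALUE + BIAS + random.gauss(0, 1.5)

    for n in (5, 50, 5000):
        mean = sum(measure() for _ in range(n)) / n
        print(n, round(mean, 2))
    # As n grows the average settles near 99.0, not 97.0: increasing the
    # sample size cancels random error (better precision) but cannot remove
    # the systematic +2 bias (no better accuracy or validity).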
Confounding has traditionally been defined as bias arising from the co-occurrence or mixing of effects of extraneous factors, referred to as confounders, with the main effect(s) of interest. A more recent definition of confounding invokes the notion of counterfactual effects. According to this view, when one observes an outcome of interest, say Y=1 (as opposed to Y=0), in a given population A which is entirely exposed (i.e. exposure X = 1 for every unit of the population), the risk of this event will be RA1. The counterfactual or unobserved risk RA0 corresponds to the risk which would have been observed if these same individuals had been unexposed (i.e. X = 0 for every unit of the population). The true effect of exposure therefore is: RA1 − RA0 (if one is interested in risk differences) or RA1/RA0 (if one is interested in relative risk). Since the counterfactual risk RA0 is unobservable, we approximate it using a second population B, and we actually measure the following relations: RA1 − RB0 or RA1/RB0. In this situation, confounding occurs when RA0 ≠ RB0. (NB: The example assumes binary outcome and exposure variables.) Some epidemiologists prefer to think of confounding separately from common categorizations of bias since, unlike selection and information bias, confounding stems from real causal effects.
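The counterfactual definition can be restated as a small numeric sketch; the three risks below are invented purely for illustration.

    # Population A is observed fully exposed; population B supplies the
    # substitute unexposed risk. Confounding occurs when R_A0 != R_B0.
    R_A1 = 0.30   # observed risk in A with everyone exposed (X = 1)
    R_A0 = 0.10   # unobservable counterfactual risk in A if unexposed (X = 0)
    R_B0 = 0.20   # observed risk in the substitute unexposed population B

    true_rr     = R_A1 / R_A0    # 3.0: the causal relative risk
    measured_rr = R_A1 / R_B0    # 1.5: what the study actually estimates
    confounded  = R_A0 != R_B0   # True: B fails to stand in for A's counterfactual
    print(true_rr, measured_rr, confounded)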
The profession Few universities have offered epidemiology as a course of study at the undergraduate level.[citation needed] An undergraduate program exists at Johns Hopkins University in which students who major in public health can take graduate-level courses—including epidemiology—during their senior year at the Bloomberg School of Public Health. In addition to its master's and doctoral degrees in epidemiology, the University of Michigan School of Public Health has offered undergraduate degree programs since 2017 that include coursework in epidemiology. Although epidemiologic research is conducted by individuals from diverse disciplines, variable levels of training in epidemiologic methods are provided during pharmacy, medical, veterinary, social work, podiatry, nursing, physical therapy, and clinical psychology doctoral programs in addition to the formal training master's and doctoral students in public health fields receive. As public health practitioners, epidemiologists work in a number of different settings. Some epidemiologists work "in the field" (i.e., in the community; commonly[according to whom?] in a public health service), and are often at the forefront of investigating and combating disease outbreaks.[citation needed] Others work for non-profit organizations, universities, hospitals, or larger government entities (e.g., state and local health departments in the United States), ministries of health, Doctors without Borders, the Centers for Disease Control and Prevention (CDC), the Health Protection Agency, the World Health Organization (WHO), or the Public Health Agency of Canada. Epidemiologists can also work in for-profit organizations (e.g., pharmaceutical and medical device companies) in groups such as market research or clinical development. An April 2020 University of Southern California article noted that, "The coronavirus epidemic... thrust epidemiology – the study of the incidence, distribution and control of disease in a population – to the forefront of scientific disciplines across the globe and even made temporary celebrities out of some of its practitioners."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-51] | [TOKENS: 8626]
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger and Threads. The company also operates an advertising network for its own sites and third parties; as of 2023[update], advertising accounted for 97.8 percent of its total revenue. Meta has been described as a part of Big Tech, which refers to the six largest tech companies in the United States – Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia – which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation yet for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Mobility and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations – surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods – and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for the trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline. 
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Facebook was added to the S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook similar to TikTok that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. Each company in the consortium was expected to contribute $10 million to fund the launch of the cryptocurrency coin, named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021 about Facebook's plan to rebrand the company and change its name. 
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the pivoting efforts toward building the metaverse – without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflected the company's shifting long-term focus of building the metaverse, a digital extension of the physical world through social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, thus it would be transferring its rights to the name to Meta Platforms, and the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertising revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to reports published by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Meta's Facebook and Instagram platforms (though not Meta-owned WhatsApp) were banned in Russia, and Meta was added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (including alleged calls for genocide) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta expressed satisfaction with customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8bn. Analysts and journalists attributed the loss to its advertising business, which had been limited by Apple's App Tracking Transparency feature and the number of people who had opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite having ranked in the top five the previous year.
In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft.
Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company donated $1.3 million to the Inter-university Consortium for Political and Social Research to finance the Social Media Archive, which aims to make social media data available to social science researchers. In 2023, Ireland's Data Protection Commissioner imposed a record €1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens.
In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage, instead operating with an open-source model that would allow it to ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use; an earlier version of Llama had been released only to academics.
In August 2023, Meta announced the permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires that Canadian news outlets be compensated for content shared on its platforms. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of achieving a $1 trillion market capitalization. In November 2023, Meta had launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards.
Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety.
In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan had encountered a troubling issue when Instagram removed his posts, citing copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, which exploited Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection.
On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity". This decision followed allegations that RT and its employees had funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine.
At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On 4 October 2024, Meta announced a new AI model called Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets.
On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10bn global underwater cable spanning 25,000 miles. In the same month, Meta closed down two million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates engaged in pig butchering scams. In December 2024, Meta announced that, beginning in February 2025, it would require advertisers running financial services ads in Australia to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage impacting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively.
In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of their being gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members.
In June 2025, Meta decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit in an effort to make the unit leaner and more efficient; the company described the unit as "bloated" and sought to trim down the department. The layoffs were expected to affect Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research unit (FAIR), and other product-related positions.
Mergers and acquisitions
Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy the mobile messaging company WhatsApp for US$19 billion in cash and stock; the acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock; Oculus released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded as Meta Platforms, Inc., Oculus was rebranded as Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million.
In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion. The deal was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup.
In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China.
Lobbying
In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom.
Partnerships
In February 2026, Meta announced a long-term partnership with Nvidia.
Censorship
In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to the presidency in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, Meta flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to suppress a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers.
From October 2025, Meta began removing, and restricting access to, accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing".
Disinformation concerns
Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives, partnering with third-party fact-checkers and publicly flagging fake news, were regularly ineffective and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue to disseminate a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern.
In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision received criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of a wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support.
Lawsuits
Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc. and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy law involving the Cambridge Analytica scandal. Each violation of the Privacy Act carries a theoretical maximum penalty of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed.
On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook, an antitrust lawsuit against the company. The lawsuit concerned Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopolistic power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual argument of an internet where the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this harmed competition or consumers. In November 2025, it was ruled that Meta had not violated antitrust laws and held no monopoly in the market.
On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country.
In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions in Kenya for outsourced workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees.
In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company had refused to take responsibility.
The company noted that tools had been developed for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. In addition, the company said it was providing resources specific to eating disorders, as well as developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram.
In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using its "Special Ad Audiences" advertising tool.
In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, Ireland's Data Protection Commission, acting on a binding decision of the European Data Protection Board, fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring the personal data of Facebook users to servers in the U.S.
In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements; the plaintiffs were seeking approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021 could proceed.
In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forced users to either allow their personal data to be used to target advertisements or pay a subscription fee for advertising-free versions of Facebook and Instagram.
In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta had moved the moderation service to the Ghanaian capital of Accra after legal issues in its previous location, Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions there are worse than in the previous Kenyan location, with many workers afraid of speaking out for fear of being returned to conflict zones. Workers reported mental illness, suicide attempts, and low pay.
On 26 January 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg had approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions.
In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies[clarification needed], filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta.
Structure
As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox.
Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems.
Revenue
Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data.
In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the previous 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement levels of the advertisement itself. As with other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta's platforms employs two methods based on the audience's viewing habits, likes and shares, and purchasing data: targeted audiences and "lookalike" audiences.
The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing, and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited pays some Irish tax, the effective minimum U.S. tax for the Irish subsidiary is around 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. Irish at the GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR.
Facilities
Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue.
In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018; the offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have access to the Internet. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London; Meta reportedly had another 18 years left on its lease of the site.
As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns.
Reception
US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Ex-Facebook employee Frances Haugen, the whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day."
In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: "Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile." Eccles described being sexually harassed by another user, as well as hearing "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as seven years old on the platform, despite Oculus headsets being intended for users over 13.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-GPro87-185] | [TOKENS: 10728]
Contents PlayStation (console)
The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively known as PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn.
Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing that advertised it as the preferred console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units.
The PlayStation signalled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.
History
The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé.
The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been a manufacturer for the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application.
The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines.
Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony.
Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but decided to turn what it had developed with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from the majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters.
Ohga shifted Kutaragi and nine of his team members from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry.
On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX", following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy".
Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market.
Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995) to the console. Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that the game would be the PlayStation's first title. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994.
Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the company played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005.
Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world.
The PlayStation's architecture and interconnectivity with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of the machine should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect given the 3.5-megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed.
Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700.
Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of Sony's US interactive entertainment operations, summoned SCEA president Steve Race to the conference stage, who said "$299" and left to a round of applause. The attention given to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely.
The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November, it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64.
In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony eventually launched the console (in its PS One model) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the registration of the trademark by a third company meant the console could not be released there, and the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console had been the Sega Saturn, but after the Saturn left the market the PlayStation's user base grew to some 300,000 users by January 2000, even though Sony China had no plans to release it.
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the geometric symbols of the controller's face buttons stood in for letters, stylised as "LIVE IN Y○UR W○RLD. PL△Y IN ○URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red E, read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing.
In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time.
By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 textured, light-sourced polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software necessary to program PlayStation games and applications in C.
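As an illustration of the 2D-as-polygons approach described above, the following minimal C sketch models a sprite as a screen-space textured quad of the kind the rasteriser consumes. This is a hypothetical sketch, not Sony's actual SDK: every type and function name here (Vertex2D, TexturedQuad, make_sprite_quad) is invented for illustration.

/* Illustrative sketch only: models how a console without a dedicated 2D
 * blitter can treat a sprite as a textured quad. All types and names are
 * hypothetical, not Sony's actual PlayStation SDK. */
#include <stdio.h>

typedef struct { short x, y; } Vertex2D;           /* screen-space vertex */

typedef struct {
    Vertex2D v[4];        /* the four corners of the quad                 */
    unsigned char u, v0;  /* top-left texel coordinate in texture memory  */
    unsigned short clut;  /* colour look-up table (palette) identifier    */
} TexturedQuad;

/* Build a quad covering a w-by-h sprite at screen position (x, y). A real
 * GPU packet would also carry semi-transparency and texture-page bits.   */
static TexturedQuad make_sprite_quad(short x, short y, short w, short h,
                                     unsigned char u, unsigned char v,
                                     unsigned short clut)
{
    TexturedQuad q;
    q.v[0] = (Vertex2D){ x,     y     };
    q.v[1] = (Vertex2D){ x + w, y     };
    q.v[2] = (Vertex2D){ x,     y + h };
    q.v[3] = (Vertex2D){ x + w, y + h };
    q.u = u;
    q.v0 = v;
    q.clut = clut;
    return q;
}

int main(void)
{
    /* A 32x32 sprite at (100, 80): the "2D" element is now just geometry
     * that the rasteriser shades and textures like any other polygon.    */
    TexturedQuad q = make_sprite_quad(100, 80, 32, 32, 0, 0, 0x7ABC);
    printf("quad corners: (%d,%d) .. (%d,%d)\n",
           q.v[0].x, q.v[0].y, q.v[3].x, q.v[3].y);
    return 0;
}

In this scheme, all the geometry unit has to produce for a 2D element is its four screen-space corners; the GPU then shades and textures the quad exactly as it would any other polygon.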
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than labelling its buttons with traditionally used letters or numbers, the PlayStation controller established a trademark set of symbols which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the analogue sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or without closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000 models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute 23% of Sony's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the per-unit cost of production was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo themselves or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system-on-a-chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-51] | [TOKENS: 11349]
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radio equipment used to detect and transmit interstellar communications. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given humanity's history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements needed for organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth.
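The one-second figure can be checked against a standard result for the radiation-dominated era (a textbook relation, not taken from this article), which ties the background temperature to cosmic time:

\[ T(t) \approx 10^{10}\,\mathrm{K}\,\left(\frac{t}{1\,\mathrm{s}}\right)^{-1/2} \]

This reproduces the ten-billion-kelvin temperature at t = 1 s; the later cooling towards temperate levels proceeded on a different, slower scaling once matter came to dominate the energy density.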
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread (by meteoroids, for example) between habitable planets in a process called panspermia. During most of their stellar evolution, stars fuse hydrogen nuclei into helium nuclei, and the slightly lower mass of the resulting helium allows the star to release the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During their last stages, stars start fusing helium nuclei to form carbon nuclei. The larger stars can fuse further, burning carbon into neon and magnesium, oxygen into silicon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins the clouds that eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in roughly 100,000 years (4.4 light years is about 4.2×10¹³ km, which at 50,000 km/h takes about 8.3×10⁸ hours, or some 95,000 years). Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter accounts for a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even to actually have such liquid water. Venus is located in the Solar System's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
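The habitable-zone boundaries can be made quantitative with a standard equilibrium-temperature estimate (a textbook approximation, not taken from this article): for a star of luminosity L and a planet with Bond albedo A orbiting at distance d, balancing absorbed starlight against thermal radiation gives

\[ T_{\mathrm{eq}} = \left(\frac{(1-A)\,L}{16\pi\sigma d^{2}}\right)^{1/4} \]

so the orbital distance at which a given temperature occurs scales as the square root of the stellar luminosity,

\[ d_{\mathrm{HZ}} \approx \sqrt{L/L_{\odot}}\ \mathrm{au}, \]

which is why the zone lies much closer in around dim red dwarfs and farther out around brighter stars.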
The actual distances for the habitable zones vary according to the type of star, and even the stellar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence No life in the cosmos beyond Earth has ever been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets to have a habitability similar to Earth's, while the age of the universe gives enough time for a long process analogous to the history of Earth to happen elsewhere. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is

\[ N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L \]

where N is the number of civilizations in the Milky Way whose emissions are detectable; R* is the rate of formation of suitable stars; fp is the fraction of those stars with planetary systems; ne is the number of planets per such system with an environment suitable for life; fl is the fraction of suitable planets on which life actually appears; fi is the fraction of life-bearing planets on which intelligent life emerges; fc is the fraction of civilizations that release detectable signs of their existence into space; and L is the length of time such civilizations release detectable signals. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution:

\[ 10{,}000 = 5 \cdot 0.5 \cdot 2 \cdot 1 \cdot 0.2 \cdot 1 \cdot 10{,}000 \] [better source needed]

The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to make noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe (6.25×10¹⁸ × 10⁻⁹ = 6.25×10⁹). A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that such systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight, and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the speeds of atoms, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust, the most abundant of those elements is silicon; in the hydrosphere, it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection, a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, in which some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear if our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far, no alternative process to achieve such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not yet fully known. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Conditions on the other planets in the Solar System, as well as on worlds in galaxies beyond the Milky Way, are very harsh and seem too extreme to harbor any life. These worlds can feature intense UV radiation paired with extreme temperatures, a lack of water, and other conditions that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to have harbored life. Fossil evidence, as well as long-standing theories backed by years of research and study, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that most life on Earth now inhabits; hydrothermal vents, for instance, are scorching hot because magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, which suggests that some form of life could be supported even in the harshest environments, such as those on other planets in the Solar System. The aspect of these harsh environments that makes them plausible sites for the origin of life on Earth, as well as for the possible emergence of life on other planets, is the spontaneous formation of chemical reactions. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was chemically reducing, and these carbon-fixing compounds were therefore necessary for the survival, and possible origin, of life on Earth. From the limited information scientists have regarding the atmospheres of planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same reduced, carbon-fixing chemical compounds occurring around hydrothermal vents could also occur on these planets' surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. Venus has a strong greenhouse effect, the hottest surface in the Solar System, clouds of sulfuric acid, and a thick carbon-dioxide atmosphere with enormous pressure; all of its surface liquid water has been lost. Comparing the two planets helps in understanding the precise differences that lead to beneficial or harmful conditions for life. Despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. Europa has contact between its ocean and the rocky interior, which helps the chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect the phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and it uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. However, the lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory mission, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants using photosynthesis. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out terrestrial contamination of the meteorites, as those components would not be freely available in the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection of hydrogen cyanide, a chemical possibly essential for life as we know it, in the plumes of Enceladus, a moon of Saturn, along with other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."

Although most searches focus on the biology of extraterrestrial life, an extraterrestrial intelligence advanced enough to develop a civilization might be detectable by other means as well. Technology may generate technosignatures: effects on or around the home planet that cannot be attributed to natural causes. Three main types of technosignature are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.

Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge is that such signals also have natural sources, such as gamma-ray bursts and supernovae, so an artificial signal would have to be distinguished by its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is free of human biases and preconceptions. Moreover, even if an advanced extraterrestrial civilization exists, there is no guarantee that it is transmitting radio communications in the direction of Earth, and the time a signal needs to travel across space means that any answer might arrive decades or centuries after the initial message.

The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, a gas that could be detectable from afar. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component in the development of a potential extraterrestrial technological civilization, as it has been on Earth, and fossil fuels might be generated and used on such worlds as well. An abundance of chlorofluorocarbons in an atmosphere would also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet could be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets at the level of detail needed to perceive it.

The Kardashev scale proposes that a civilization may eventually begin consuming energy directly from its local star, which would require giant structures built around it, called Dyson spheres. Such speculative structures would produce an excess of infrared radiation that telescopes could notice. Excess infrared radiation is typical of young stars, which are surrounded by the dusty protoplanetary disks that will eventually form planets, but an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature: such elements would, in theory, be found if the star were being used as an incinerator or repository for nuclear waste products.

Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets discovered so far range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter.
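To make the idea of a habitable zone concrete: to a first approximation, the distance at which a planet receives Earth-like levels of starlight scales with the square root of its star's luminosity. The short Python sketch below illustrates this scaling. The boundary fluxes used (roughly 1.1 and 0.36 times Earth's insolation for the inner and outer edges) are approximate values of the kind produced by published climate models, and should be read as illustrative assumptions rather than settled figures.

    import math

    def habitable_zone_au(luminosity_solar, s_inner=1.1, s_outer=0.36):
        # Stellar flux at distance d falls off as L / d**2, so the distance
        # at which an assumed boundary flux S is reached is sqrt(L / S).
        # Distances are in astronomical units, luminosity in solar units.
        inner = math.sqrt(luminosity_solar / s_inner)
        outer = math.sqrt(luminosity_solar / s_outer)
        return inner, outer

    # Illustrative stars with typical round-number luminosities.
    for name, lum in [("Sun-like star", 1.0), ("red dwarf", 0.04), ("F-type star", 3.0)]:
        lo, hi = habitable_zone_au(lum)
        print(f"{name}: roughly {lo:.2f} to {hi:.2f} AU")

Under these assumptions a Sun-like star's habitable zone spans roughly 0.95 to 1.67 AU, which brackets Earth's orbit, while a dim red dwarf's sits several times closer in; this is why planets such as Proxima Centauri b can be considered potentially habitable despite orbiting far nearer to their star than Earth does to the Sun.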
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is, on average, at least one planet per star. About 1 in 5 Sun-like stars has an "Earth-sized" planet in the habitable zone, with the nearest expected to lie within 12 light-years of Earth. Assuming there are 200 billion stars in the Milky Way, that would amount to 11 billion potentially habitable Earth-sized planets, rising to 40 billion if red dwarfs are included. Rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known was PSR B1257+12 A, about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet it is too massive to be a planet and may instead be a brown dwarf. Almost all of the planets detected so far lie within the Milky Way, but there have also been a few possible detections of extragalactic planets.

The study of planetary habitability also considers a wide range of other factors in determining a planet's suitability for hosting life. One sign that a planet probably already contains life is an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment; on Earth, this replenishment comes from photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy as it transits its star, though this might only be feasible with dim stars like white dwarfs.

History and cultural impact

The modern concept of extraterrestrial life rests on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars in Ancient Greece were the first to consider the universe inherently understandable and to reject explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky by the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they produced precursors to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the recognition that Earth is round rather than flat. The cosmos was first organized in a geocentric model, in which the Sun and all other celestial bodies were held to revolve around Earth; those bodies, however, were not considered worlds. In the Greek understanding, the world comprised both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world and to which the world would eventually return.
Eventually two schools emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus reasoned that the processes that created our world, with its animals and plants, should have created other worlds elsewhere, along with animals and plants of their own. Aristotle held instead that all of the earth element naturally falls toward the center of the universe, which would make it impossible for other planets to exist elsewhere; under that reasoning, Earth was not only at the center of the universe, it was also the only planet in it.

Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth that might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism, which mention multiple "worlds" that support human life, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers such as Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an, and Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas of other worlds differed from the modern understanding of the structure of the universe and did not postulate planetary systems other than the Solar System: when those authors spoke of other worlds, they meant places at the center of their own systems, each surrounded by its own stellar vault and cosmos.

The Greek ideas, and the disputes between atomists and Aristotelians, outlived the fall of Greek civilization. The Great Library of Alexandria compiled knowledge on the subject, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese, and its own scholars, and this learning also spread through the Byzantine Empire, from where it eventually returned to Europe during the Middle Ages. However, because the Greek atomist doctrine held that the world was created by the random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to these topics in a uniform way, and both stricter and more permissive views existed within it. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere.

By the late Middle Ages the geocentric model was known to contain many inaccuracies, but it remained in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal initially found little acceptance because, having kept the assumption that orbits were perfect circles, his model produced as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories that used highly sophisticated sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler could: orbits were not perfect circles but ellipses. This insight benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, an instrument perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also showed that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special.

The new ideas met resistance from the Catholic Church. Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.

The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics explaining the motions of all things in the universe, including planetary orbits. By this point the geocentric model had been definitively discarded. By this time, too, the use of the scientific method had become standard, and new discoveries were expected to come with evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just how nature works but why it works that way. There had been very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not mere lights but physical objects. The notion that life might exist on them soon became an ongoing topic of discussion, though one with no practical means of investigation.

The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th- and 19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin; at the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which soon, however, turned out to be optical illusions. Despite this, in 1895 the American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when the U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere, and by 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.

As a consequence of the belief in spontaneous generation, little thought was given to the conditions on each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System nevertheless remained strong until Mariner 4 and Mariner 9 returned close-up images of Mars, which put a definitive end to the idea of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation also forced investigation into the origin of life. Although abiogenesis is the more widely accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere; among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).

The science fiction genre, although not yet so named, developed during the late 19th century. The growth of extraterrestrials as a theme in fiction shaped popular perceptions of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science moved at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, structures seen on the Moon or Mars were readily attributed to Selenites or Martians, while later, more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter: the low-resolution photos showed a rock formation resembling a human face, but later spacecraft photographed the site in higher detail and showed that there was nothing special about it.

The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective: abiogenesis, for example, is of interest to astrobiology not for the origin of life on Earth as such, but for the chance that a similar process has taken place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed to judge whether they are likely to be shared by all forms of life across the cosmos or are peculiar to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth descends from the same ancestor, and it is hard to infer general characteristics from a group with only a single example to analyse.

The 20th century brought great technological advances, speculation about hypothetical future technologies, and increased basic scientific knowledge among the general population, thanks to the popularization of science through the mass media. Public interest in extraterrestrial life, combined with the lack of discoveries by mainstream science, led to the emergence of pseudosciences that provided affirmative, if questionable, answers about the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people then failed to understand what they were seeing. Most UFOs and UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes.

Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on extraterrestrial life by tracing the evolution of philosophical thought on the topic from ancient times into the modern era. His review of the contributions of Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity's search for extraterrestrial life can be profoundly influenced by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by helping humanity actualize superior ways of living here on Earth.

By the 21st century it was accepted that, within the Solar System, multicellular life can exist only on Earth, yet interest in extraterrestrial life increased regardless, as a result of advances in several sciences. Knowledge of planetary habitability makes it possible to assess in scientific terms the likelihood of finding life on each specific celestial body, since it is now known which features are beneficial or harmful to life. Astronomy and telescopes have improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.

Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology carries out communication in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. Other scientists, on the other hand, are pessimistic: Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
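Both the optimistic and the pessimistic positions are often framed in terms of the Drake equation, which Frank Drake proposed as a way of organizing the factors that determine the number N of detectable civilizations in the galaxy; the source does not spell the equation out, so the following is offered only as context. The minimal Python sketch below simply multiplies the factors, and the two parameter sets are purely illustrative assumptions, chosen to show how optimistic and pessimistic guesses diverge by many orders of magnitude.

    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
        # Drake equation: N = R* x fp x ne x fl x fi x fc x L, where R* is
        # the star formation rate (stars/year), fp the fraction of stars
        # with planets, ne the habitable planets per such star, fl/fi/fc
        # the fractions developing life, intelligence, and detectable
        # technology, and L the years a civilization remains detectable.
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

    # Purely illustrative parameter choices, not measurements:
    optimistic = drake(3, 1.0, 0.2, 1.0, 0.5, 0.5, 1_000_000)  # 150,000 civilizations
    pessimistic = drake(1, 0.5, 0.1, 1e-6, 0.01, 0.1, 1_000)   # ~5e-8: effectively alone
    print(optimistic, pessimistic)

The equation's value lies less in prediction than in exposing where the disagreement sits: the astronomical factors at the front are now reasonably constrained by observation, while the biological and social factors remain little more than guesses.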
In 2000, the geologist and paleontologist Peter Ward and the astrobiologist Donald Brownlee published the book Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, which claims that Earth-like life is rare in the universe while microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.

As for the possible risks, the theoretical physicist Stephen Hawking cautioned in 2010 that humans should not try to contact alien life forms, warning that aliens might pillage Earth for resources: "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and the Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort called the Breakthrough Initiatives to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia, United States, and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without conclusions; as of 2010, the UN lacked response mechanisms for the event of extraterrestrial contact. One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office, part of whose mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life."

In 2016, the Chinese government released a white paper detailing its space program; according to the document, one of the program's research objectives is the search for extraterrestrial life, which is also an objective of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST). In 2020, Dmitry Rogozin, then head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep-space research, and acknowledged the possibility that primitive life exists on other planets of the Solar System. The French space agency maintains an office for the study of "unidentified aerospace phenomena", along with a publicly accessible database of such phenomena containing over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for about 25% of them an extraterrestrial origin can neither be confirmed nor ruled out.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large", but he disagreed with his former colleague Haim Eshed, who claimed that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, such beings were not at first thought of as being any different from humans: with no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with Charles Darwin's 1859 book On the Origin of Species, which proposed the theory of evolution. With the notion that evolution on other planets might take different directions, science fiction authors began to create bizarre aliens clearly distinct from humans; a common device was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, along with budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and further as CGI became more effective and less expensive. Real-life events sometimes capture the popular imagination and thereby influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported being abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype used in works of fiction.