[SOURCE: https://en.wikipedia.org/wiki/Toss_Across] | [TOKENS: 1069]
Toss Across Toss Across is a game first introduced in 1969 by the now-defunct Ideal Toy Company. The game was designed by Marvin Glass and Associates, created by Hank Kramer, Larry Reiner and Walter Moe, and is now distributed by Mattel. Participants play tic-tac-toe by lobbing small beanbags at targets in an attempt to change the targets to their desired letter. As in traditional tic-tac-toe, the first player to get three of their letters in a row wins the game. Similar games are sold under other names, such as Tic Tac Throw. The targets are three-sided blocks situated on a frame such that the impact of the beanbags can turn a block, changing the letter it shows. Each block has a blank side, an X, and an O. Modern boards are entirely plastic, less than a meter square. Six beanbags are included with the game. Rules The official rules included with the game call for the X player to go first. Each player starts with three beanbags. Players stand approximately six feet from the board to toss their beanbags, alternating turns. The beanbags are only retrieved after all six are thrown. Whenever three matching symbols in a row are turned over by either player, the game ends immediately. Multiple players may participate by dividing into two teams. (Turn order: 1. Player 1 from Team A, 2. Player 1 from Team B, 3. Player 2 from Team A, 4. Player 2 from Team B.) Variations: Luck versus skill The strength of Toss Across is its balance between luck and skill. While the game generally rewards accurate tossing and effective strategy, there are elements of luck involved. Even if a player succeeds in hitting the square they wish to change, it is often impossible to control the effect the beanbag's impact will have. Blocks may turn quickly, spinning a few times before coming to rest, at which point any side may be facing up. Players can be frustrated by hitting the square they want, only to change it to their opponent's letter. Furthermore, the low quality of the plastic board can punish players: blocks do not always turn, even when impacted. Occasionally a beanbag toss can affect two (or in rare cases, more) squares simultaneously. Skillful players can attempt to do this on purpose, but more often it happens without intent. With practice, players can become increasingly adept at hitting the precise location on the board that they wish, giving them a better chance of affecting the square they want to change. In the original 1969 edition of the game, the pieces in the squares did not turn freely, but could instead only turn one face in either direction from neutral. This is why the neutral squares had small X and O decals on them; a bag striking the small X would turn the square to X and stop it there. To turn the square away from X, a bag had to strike the other half of the square, after which it might turn back to neutral, or to O. The original game therefore placed a much stronger emphasis on skill than the current version. Strategy Unlike traditional tic-tac-toe, a letter placed in a given square may not stay there for the rest of the game; a square may change letters multiple times before the game is resolved. This underlies the central strategy of the game: it is often more effective to attempt to remove opponents' letters than to block them from getting three in a row. In traditional tic-tac-toe, preventing a player from getting three letters in a row is accomplished by placing one's own symbol so that the line cannot be completed. 
While this is entirely possible in Toss Across, the somewhat arbitrary effect of hitting a square can work against players. For example: Joe (playing O) observes that Jane (playing X) has changed both bottom corner squares to X. In traditional tic-tac-toe, Joe would place an O in the bottom center square to block her. However, in Toss Across, unless Joe is confident he can control his throw to the degree that hitting the bottom center block will definitely change it to an O, this is not the best move. There is a 1-in-3 chance that hitting the block will change it to X, giving Jane the victory. Moreover, even if Joe achieves his best possible outcome and changes the block to O, Jane will still aim for this block to change it to X. Joe's move is essentially wasted. Instead, Joe aims for one of Jane's established corners. If he hits it, there is a 2-in-3 chance that he will remove the X from the board, and he even has a 1-in-3 chance to flip the square to O, radically altering Jane's strategy. While no method of tossing the beanbag is guaranteed to have the desired effect, tossing the bag vertically (such that it rotates corner over corner) rather than horizontally (such that it stays mostly flat) tends to create more significant impacts and more often turns the blocks. Although no official rules prevent overhand tossing, some players ban this practice merely to prevent high-power throws at the board, which is not particularly sturdy.
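The probabilities in the example above can be checked with a short simulation. The following Python sketch is illustrative only (it is not part of the original article) and assumes, as the 1-in-3 figures imply, that an effective hit leaves the struck block showing any of its three faces with equal probability; the function names and trial count are arbitrary.

```python
import random

FACES = ["blank", "X", "O"]

def hit():
    # Assumption drawn from the article's 1-in-3 figures: an effective hit
    # leaves the struck block showing any of its three faces with equal chance.
    return random.choice(FACES)

def estimate(trials=100_000):
    jane_wins_now = 0   # Option A: the empty bottom-centre block turns to X
    x_removed = 0       # Option B: one of Jane's X corners no longer shows X
    flipped_to_o = 0    # Option B: that corner even flips to O
    for _ in range(trials):
        if hit() == "X":
            jane_wins_now += 1
        result = hit()
        if result != "X":
            x_removed += 1
            if result == "O":
                flipped_to_o += 1
    print(f"Option A, block the centre: immediate loss ~{jane_wins_now / trials:.2f} (expected 0.33)")
    print(f"Option B, attack a corner:  X removed      ~{x_removed / trials:.2f} (expected 0.67)")
    print(f"Option B, attack a corner:  flipped to O   ~{flipped_to_o / trials:.2f} (expected 0.33)")

if __name__ == "__main__":
    estimate()
```

Under these assumptions the simulation reproduces the figures quoted in the example: roughly a one-third chance that blocking the center hands Jane the win, versus a two-thirds chance that attacking an occupied corner removes her X.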
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ayman_Odeh] | [TOKENS: 2067]
Contents Ayman Odeh Ayman Aadil Odeh (Arabic: أيمن عادل عودة, Hebrew: אַיְּימָן עַאדֶל עוֹדֶה; born 1 January 1975) is an Arab Israeli lawyer and politician. He is a member of Knesset and leader of the Hadash alliance. Biography Odeh was born in 1975, and raised in Haifa, within the Kababir neighbourhood. His father was a construction worker. Although his clan mostly belongs to the Ahmadiyya, Odeh's parents, who were Sunni Muslims, sent him to a Christian school where he was the only Muslim student, proudly noting that he got an A in New Testament studies on his high school final exams. He now describes himself as atheist, having "transcended the confines of religion and ethnicity". He studied law at the University of Craiova in Romania from 1993 to 1997. During his law studies in Romania, he took part in pro-Palestinian rallies, learned Romanian, and read the memoirs of various political thinkers and revolutionaries. He earned a Bachelor of Laws degree from the University of Craiova and in 2001 was certified to practice law in Israel, though he is not a member of the Israel Bar Association.[citation needed] Odeh met his wife Nardine Aseli, a gynecologist, at the wake for her 17-year-old brother who was killed in 2000 at the start of the Second Intifada. They married in 2005 and have three children. He speaks Arabic, Hebrew, English and Romanian. Political career Odeh joined Hadash, and represented it on Haifa City Council between 1998 and 2005, before becoming the party's secretary-general in 2006. He was placed 75th on the party's list for the 2009 elections, in which Hadash won four seats. He won sixth place on the party's list for the 2013 Knesset elections, but failed to enter the Knesset, as the party again won four seats. Following the announcement that Hadash leader Mohammed Barakeh was resigning prior to the 2015 elections, Odeh was elected as the party's new leader. In the buildup to the 2015 elections, Hadash joined the Joint List, an alliance of the main Arab parties. Odeh was placed at the head of the Joint List's electoral list. Analysts credited the charismatic Odeh for giving the Arab political union a more moderate, pragmatic face. Odeh was elected to the 20th Knesset, along with 12 other candidates from the Joint List. In an interview with The Times of Israel, Odeh discussed the Joint List's social agenda, including a 10-year plan to tackle issues pertinent to the Arab sector, such as employment of women, rehabilitation of failing regional councils, recognition of unrecognized Bedouin communities in the Negev, public transportation in Arab towns, and eradication of violence. He also said he supported the right of the Jewish people to self-determination in Israel, adding that a Palestinian state should fulfill the same goals for Arab Palestinians. Odeh's campaign for the March 2015 elections had a "breakthrough moment" when, in a televised debate of candidates, Avigdor Lieberman, Israel's foreign minister, called Odeh a "Palestinian citizen" and said Odeh was not welcome in Israel. Odeh replied, "I am very welcome in my homeland. I am part of the nature, the surroundings, the landscape", contrasting his birth in Israel with Lieberman's immigration from the former Soviet Union. Odeh is now viewed as a potential power broker given that Arab parties appear to be uniting to meet the government's requirement that parties meet a minimum threshold of votes to secure a place in the Knesset. Odeh has a style that contrasts with that of MK Haneen Zoabi, who is more confrontational. 
Odeh voices his willingness to work with Jewish partners, and he often quotes Martin Luther King Jr. In the 2020 election, Odeh and the Joint List recommended Benny Gantz for prime minister. Odeh stated that he was "happy for the release of hostages and prisoners" following the 2023 Gaza war ceasefire. A petition was created in January 2025 to expel him from the Knesset. After Odeh later said in a May speech that "Gaza has won and Gaza will win", the petition met the required number of signatures to move forward. The motion was brought forward in June 2025 by the Knesset House Committee and was approved by 70 MKs (including 10 from the opposition). A Knesset vote in July to expel Odeh failed, with 73 MKs in favor, 15 against, and MKs from United Torah Judaism, Blue and White and most of Yesh Atid abstaining; a threshold of 90 votes was needed for it to pass. Views and opinions Odeh says his service on Haifa City Council made it clear to him that Arabs and Jews must work together. He describes Haifa as "the most liberal multicultural yet homogenous city in Israel". Odeh has also expressed strong support for increasing recognition of Mizrahi culture and Arab Jewish history in official Israeli and Palestinian discourses. In a widely cited speech to the Knesset plenum in July 2015, MK Odeh argued that the State of Israel has systematically discriminated against and suppressed the culture of Jews who immigrated to Israel from Arab and Muslim lands (who make up the majority of the Israeli population) in order to feed the idea of a natural separation between Jews and Arabs. He also argued that the large role played by Jews in forming historical and modern Arab culture (including famous Jews such as Rabbi David Buzaglo, who wrote Jewish religious poetry primarily in Arabic, and famous Jews who were popular in the Arab world in the mid-20th century, such as Leila Mourad) has been forgotten by Jews and Arabs alike due to the ideological elements of the Arab–Israeli conflict and the desire by Israel's elite to portray a Western image of Jews and of the country. Odeh called upon Jewish and Arab members of the Knesset alike to support a new Knesset committee (which he had joined as a member) lobbying for renewed emphasis on the culture of Jews from Arab and Muslim lands. In that speech, Odeh summarized his position thus: "The culture of the Jews of Arab and Islamic countries is a shared Jewish and Arab culture. Because of this, the state has fought [against] it, and yet because of this [same reason], we must fight to strengthen it." Odeh says, "We represent those who are invisible in this country, and we give them a voice. We also bring a message of hope to all people, not just to the Arabs, but to the Jews, too". In October 2015, Odeh gave support to the "unarmed Palestinian struggle". However, when asked about "throwing rocks, ... firebombs, and shooting at cars", Odeh responded that regarding throwing rocks, he supported the First Intifada. In February 2016, Odeh considered resigning from the Knesset in protest against a controversial MK suspension bill. Controversy Israel's internal intelligence agency, the Shin Bet, has interrogated Odeh many times in the past. He said in an interview with The New Yorker: "I was called three more times by the Shin Bet. They never hit me. But they succeeded in two things. I isolated myself from my friends—I became much more introverted. 
And I had the sense the Shin Bet was watching me no matter where I went. When I went to the bus station, and I saw some guy in sunglasses, I just assumed he was Shin Bet." A right-wing activist was arrested in February 2016 for making death threats against Odeh. On 18 January 2017, Odeh was allegedly shot in the forehead with a sponge-tipped bullet by the Israel Police as he protested against the demolition of homes in the Bedouin village of Umm al-Hiran. The police initially claimed that he was hit by stones thrown by other protestors, but later backtracked, claiming both that they had never stated that Odeh was hit by stones and that they did not know what caused his head injury. The British research agency Forensic Architecture, led by Eyal Weizman of Goldsmiths, University of London, analyzed video evidence of the incident and strongly suspected that Odeh had been hit by a sponge-tipped bullet, because 47 seconds of video had been redacted – precisely the time during which Odeh was injured. In November 2024, Odeh was ejected from the Knesset after accusing Prime Minister Netanyahu of being a "serial killer of peace". During his speech, Odeh recounted the story of a man whose rented apartment was destroyed by an Israeli airstrike while he was obtaining birth certificates for his newborn twins, killing both infants and their mother. "What is your vision? A serial killer of peace?" Odeh said before being forcibly removed from the podium. On 22 May 2025, he condemned the Gaza war and was again forcibly dragged out of the Knesset. On 13 October 2025, during U.S. President Donald Trump's speech to the Knesset as part of his visit connected to the Gaza peace plan, Odeh and fellow Knesset member Ofer Cassif held signs demanding Palestinian recognition and called Trump a "terrorist". Both were forcibly removed from the Knesset.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_ref-34] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company’s cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors that valued the company at $500 billion, making OpenAI the world's most valuable privately held company and surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the company's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips. In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army, joining Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, OpenAI announced GPT-2, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google’s position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. OpenAI released its deep research agent nine days later; it scored 27% accuracy on the Humanity's Last Exam (HLE) benchmark. Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. 
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2. This model will be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists to help with research and writing. The platform utilizes GPT-5.2 as a backend to automate the process of drafting for scientific papers, including features for managing citations, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this repudiation. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to 20%. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions including personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media. Paul Nakasone then joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on an area better handled by the federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity; however, leaked documents and emails refute this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, as well as Raw Story and Alternate Media Inc., filed lawsuits against OpenAI on copyright grounds. The litigation is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. 
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and its partner as well as customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024 during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024 NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, neither the recipients of ChatGPT's work nor the sources used, could be made available, OpenAI claimed. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. 
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, Stein-Erik Soelberg, then 56 years old, allegedly murdered his mother, Suzanne Adams; in the months prior, the paranoid and delusional Soelberg had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who appear disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-134] | [TOKENS: 10628]
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer' dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] Counting rods are another example of such an early aid. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was designed to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in Berlin in 1941 as the first company whose sole purpose was developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC was developed and constructed between 1943 and its full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which a computer's operations are controlled and by which it is provided with data. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
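The interaction of memory cells, the program counter, and jump instructions described above can be illustrated with a toy simulation. The following Python sketch is purely illustrative: the three-cell instruction format, the opcodes, and the run function are invented for this example and do not correspond to any real CPU, but the loop does exactly what the control system described above does, namely fetch the instruction the program counter points at, decode it, execute it, and then either advance or overwrite the program counter.

    # Toy stored-program machine: memory is a flat list of numbered cells, and each
    # instruction occupies three consecutive cells: [opcode, operand_a, operand_b].
    def run(memory, pc=0):
        while True:
            opcode, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
            if opcode == 0:                      # HALT
                return memory
            elif opcode == 1:                    # ADD: cell[b] += cell[a]
                memory[b] += memory[a]
                pc += 3                          # advance to the next instruction
            elif opcode == 2:                    # JUMP-IF-NONZERO: if cell[a] != 0, go to cell b
                pc = b if memory[a] != 0 else pc + 3
            else:
                raise ValueError(f"unknown opcode {opcode}")

    # Program (cells 0-11): add cell 20 into cell 21, count down cell 22 by adding cell 23,
    # and jump back to cell 0 while the counter in cell 22 is non-zero.
    memory = [1, 20, 21,  1, 23, 22,  2, 22, 0,  0, 0, 0] + [0] * 8 + [5, 0, 3, -1]
    print(run(memory)[21])                       # 15: the value 5 was added three times

Because the program counter here is just another variable, rewriting it is all a "jump" amounts to, which is the mechanism behind the loops and conditional execution mentioned above.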
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
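Before returning to input/output devices, the byte and two's-complement conventions just described can be checked directly in a few lines of Python; this is a minimal sketch, and the particular values are arbitrary examples rather than anything prescribed by the text.

    # One byte (eight bits) can hold 2^8 = 256 distinct values.
    print(2 ** 8)                                         # 256

    # The same eight-bit pattern can mean 200 (unsigned) or -56 (two's complement).
    print((200).to_bytes(1, "little"))                    # b'\xc8'
    print((-56).to_bytes(1, "little", signed=True))       # b'\xc8' -- identical bits

    # Larger numbers occupy several consecutive bytes (four here, least significant first).
    raw = (100_000).to_bytes(4, "little")
    print(list(raw))                                      # [160, 134, 1, 0]
    print(int.from_bytes(raw, "little"))                  # 100000

    # In two's complement, -1 is stored as a run of all-ones bytes.
    print(list((-1).to_bytes(4, "little", signed=True)))  # [255, 255, 255, 255]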
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
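The time-slicing behaviour described above can be sketched in a few lines of Python before moving on to software. A real operating system relies on hardware interrupts to preempt a running program; the sketch below only imitates the effect, treating generator functions as toy "programs" that a round-robin scheduler resumes one slice at a time (the names worker and round_robin and the three sample workloads are invented for this illustration).

    from collections import deque

    def worker(name, steps):
        # A toy "program": each yield marks the end of one time slice.
        for i in range(steps):
            print(f"{name}: step {i + 1} of {steps}")
            yield                          # hand the processor back to the scheduler

    def round_robin(programs):
        # Resume each runnable program in turn until all of them have finished.
        ready = deque(programs)
        while ready:
            program = ready.popleft()
            try:
                next(program)              # run one time slice
                ready.append(program)      # still runnable, so queue it again
            except StopIteration:
                pass                       # this program has finished

    round_robin([worker("A", 3), worker("B", 2), worker("C", 1)])

Run as written, the output interleaves steps from A, B and C even though only one of them executes at any instant, which is the appearance of simultaneity the text describes.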
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
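As a rough illustration in Python (rather than the assembly language discussed next), the thousand-press calculator task reduces to a loop of a handful of statements; the loop condition plays the role of the conditional jump described above, and the running total plays the role of an accumulator.

    # Add together all of the numbers from 1 to 1,000.
    total = 0
    n = 1
    while n <= 1000:       # "jump back" to the top of the loop until the condition fails
        total += n
        n += 1
    print(total)           # 500500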
Written in a low-level language such as the MIPS assembly language, the same task becomes a short loop of machine instructions. Once told to run such a program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
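The remark above that machine learning models have parameters adjusted throughout training can be made concrete with a minimal, self-contained sketch. The single parameter w below is fitted by gradient descent so that w * x approximates the provided data; the data set, learning rate, and update rule are illustrative assumptions rather than a description of any particular system.

    # Toy training loop: adjust one parameter w so that w * x matches data drawn from y = 3 * x.
    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
    w = 0.0                                  # parameter, arbitrary starting value
    learning_rate = 0.01

    for epoch in range(200):
        for x, y in data:
            error = w * x - y                # how far the current prediction is from the data
            gradient = 2 * error * x         # derivative of the squared error with respect to w
            w -= learning_rate * gradient    # adjust the parameter a little

    print(round(w, 3))                       # approximately 3.0 after training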
========================================
[SOURCE: https://en.wikipedia.org/wiki/Great_Recession] | [TOKENS: 10597]
Contents Great Recession The Great Recession was a period of market decline in economies around the world that occurred from late 2007 to mid-2009, overlapping with the closely related 2008 financial crisis. The scale and timing of the recession varied from country to country (see map). At the time, the International Monetary Fund (IMF) concluded that it was the most severe economic and financial meltdown since the Great Depression. The Great Recession was caused by many weaknesses that slowly developed in the financial system, along with a series of triggering events that began with the bursting of the United States housing bubble in 2005–2012. When housing prices fell and homeowners began to abandon their mortgages, the value of mortgage-backed securities held by investment banks declined in 2007–2008, causing several to collapse or be bailed out in September 2008. This 2007–2008 phase was called the subprime mortgage crisis. The combination of banks being unable to provide funds to businesses and homeowners paying down debt rather than borrowing and spending resulted in the Great Recession. The recession officially began in the U.S. in December 2007 and lasted until June 2009, thus extending over 18 months. As with most other recessions, it appears that no known formal theoretical or empirical model was able to accurately predict the advance of this recession, except for minor signals in the sudden rise of forecast probabilities, which were still well under 50%. The recession was not felt equally around the world; whereas most of the world's developed economies, particularly in North America, South America and Europe, fell into a severe, sustained recession, many more recently developing economies suffered far less impact, particularly China, India and Indonesia, whose economies grew substantially during this period. Similarly, Oceania suffered minimal impact, in part due to its proximity to Asian markets. Terminology Two definitions of the term "economic recession" exist: one sense referring generally to "a period of reduced economic activity" and ongoing hardship; and a technical definition used in economics, which is defined operationally, specifically the contraction phase of a business cycle with two or more consecutive quarters of GDP contraction (negative GDP growth rate). The latter is typically used to influence abrupt changes in monetary policy. Under the technical definition, the recession ended in the United States in June or July 2009. Journalist Robert Kuttner has argued that 'The Great Recession' is a misnomer. According to Kuttner, "recessions are mild dips in the business cycle that are either self-correcting or soon cured by modest fiscal or monetary stimulus. Because of the continuing deflationary trap, it would be more accurate to call this decade's stagnant economy The Lesser Depression or The Great Deflation." Overview The Great Recession met the IMF criteria for being a global recession only in the single calendar year 2009. That IMF definition requires a decline in annual real world GDP per capita. Despite the fact that quarterly data are being used as recession definition criteria by all G20 members, representing 85% of the world GDP, the International Monetary Fund (IMF) has decided – in the absence of a complete data set – not to declare/measure global recessions according to quarterly GDP data. 
The seasonally adjusted PPP-weighted real GDP for the G20-zone, however, is a good indicator for the world GDP, and it was measured to have suffered a direct quarter on quarter decline during the three quarters from Q3-2008 until Q1-2009, which more accurately mark when the recession took place at the global level. According to the U.S. National Bureau of Economic Research (the official arbiter of U.S. recessions), the recession began in December 2007 and ended in June 2009, and thus extended over eighteen months. The years leading up to the crisis were characterized by an exorbitant rise in asset prices and associated boom in economic demand. Further, the U.S. shadow banking system (i.e., non-depository financial institutions such as investment banks) had grown to rival the depository system yet was not subject to the same regulatory oversight, making it vulnerable to a bank run. U.S. mortgage-backed securities, which had risks that were hard to assess, were marketed around the world, as they offered higher yields than U.S. government bonds. Many of these securities were backed by subprime mortgages, which collapsed in value when the U.S. housing bubble burst during 2006 and homeowners began to default on their mortgage payments in large numbers starting in 2007. The emergence of subprime loan losses in 2007 began the crisis and exposed other risky loans and over-inflated asset prices. With loan losses mounting and the fall of Lehman Brothers on September 15, 2008, a major panic broke out on the inter-bank loan market. There was the equivalent of a bank run on the shadow banking system, resulting in many large and well established investment banks and commercial banks in the United States and Europe suffering huge losses and even facing bankruptcy, resulting in massive public financial assistance (government bailouts). The global recession that followed resulted in a sharp drop in international trade, rising unemployment and slumping commodity prices. Several economists predicted that recovery might not appear until 2011 and that the recession would be the worst since the Great Depression of the 1930s. Economist Paul Krugman once commented on this as seemingly the beginning of "a second Great Depression". Governments and central banks responded with fiscal policy and monetary policy initiatives to stimulate national economies and reduce financial system risks. The recession renewed interest in Keynesian economic ideas on how to combat recessionary conditions. Economists advise that the stimulus measures such as quantitative easing (pumping money into the system) and holding down central bank wholesale lending interest rates should be withdrawn as soon as economies recover enough to "chart a path to sustainable growth". The distribution of household incomes in the United States became more unequal during the post-2008 economic recovery. Income inequality in the United States grew from 2005 to 2012 in more than two thirds of metropolitan areas. Median household wealth fell 35% in the U.S., from $106,591 to $68,839 between 2005 and 2011. Causes The U.S. Financial Crisis Inquiry Commission, composed of six Democratic and four Republican appointees, reported its majority findings in January 2011. It concluded that "the crisis was avoidable and was caused by" widespread failures in financial regulation; dramatic breakdowns in corporate governance, including too many financial firms acting recklessly and taking on too much risk; an explosive mix of excessive borrowing and risk-taking by households and Wall Street; policy makers who were ill prepared for the crisis; and systemic breaches in accountability and ethics at all levels. There were two Republican dissenting FCIC reports. One of them, signed by three Republican appointees, concluded that there were multiple causes. 
In his separate dissent to the majority and minority opinions of the FCIC, Commissioner Peter J. Wallison of the American Enterprise Institute (AEI) primarily blamed U.S. housing policy, including the actions of Fannie and Freddie, for the crisis. He wrote: "When the bubble began to deflate in mid-2007, the low quality and high risk loans engendered by government policies failed in unprecedented numbers." In its "Declaration of the Summit on Financial Markets and the World Economy," dated November 15, 2008, leaders of the Group of 20 cited the following causes: During a period of strong global growth, growing capital flows, and prolonged stability earlier this decade, market participants sought higher yields without an adequate appreciation of the risks and failed to exercise proper due diligence. At the same time, weak underwriting standards, unsound risk management practices, increasingly complex and opaque financial products, and consequent excessive leverage combined to create vulnerabilities in the system. Policy-makers, regulators and supervisors, in some advanced countries, did not adequately appreciate and address the risks building up in financial markets, keep pace with financial innovation, or take into account the systemic ramifications of domestic regulatory actions. Federal Reserve Chair Ben Bernanke testified in September 2010 before the FCIC regarding the causes of the crisis. He wrote that there were shocks or triggers (i.e., particular events that touched off the crisis) and vulnerabilities (i.e., structural weaknesses in the financial system, regulation and supervision) that amplified the shocks. Examples of triggers included: losses on subprime mortgage securities that began in 2007 and a run on the shadow banking system that began in the middle of 2007, which adversely affected the functioning of money markets. Examples of vulnerabilities in the private sector included: financial institution dependence on unstable sources of short-term funding such as repurchase agreements or Repos; deficiencies in corporate risk management; excessive use of leverage (borrowing to invest); and inappropriate usage of derivatives as a tool for taking excessive risks. Examples of vulnerabilities in the public sector included: statutory gaps and conflicts between regulators; ineffective use of regulatory authority; and ineffective crisis management capabilities. Bernanke also discussed "Too big to fail" institutions, monetary policy, and trade deficits. There are several "narratives" attempting to place the causes of the recession into context, with overlapping elements. Five such narratives include: Underlying narratives #1–3 is a hypothesis that growing income inequality and wage stagnation encouraged families to increase their household debt to maintain their desired living standard, fueling the bubble. Further, this greater share of income flowing to the top increased the political power of business interests, who used that power to deregulate or limit regulation of the shadow banking system. Narrative #5 challenges the popular claim (narrative #4) that subprime borrowers with shoddy credit caused the crisis by buying homes they couldn't afford. This narrative is supported by new research showing that the biggest growth of mortgage debt during the U.S. housing boom came from those with good credit scores in the middle and top of the credit score distribution – and that these borrowers accounted for a disproportionate share of defaults. 
The Economist wrote in July 2012 that the inflow of investment dollars required to fund the U.S. trade deficit was a major cause of the housing bubble and financial crisis: "The trade deficit, less than 1% of GDP in the early 1990s, hit 6% in 2006. That deficit was financed by inflows of foreign savings, in particular from East Asia and the Middle East. Much of that money went into dodgy mortgages to buy overvalued houses, and the financial crisis was the result." In May 2008, NPR explained in their Peabody Award winning program "The Giant Pool of Money" that a vast inflow of savings from developing nations flowed into the mortgage market, driving the U.S. housing bubble. This pool of fixed income savings increased from around $35 trillion in 2000 to about $70 trillion by 2008. NPR explained this money came from various sources, "[b]ut the main headline is that all sorts of poor countries became kind of rich, making things like TVs and selling us oil. China, India, Abu Dhabi, Saudi Arabia made a lot of money and banked it." Describing the crisis in Europe, Paul Krugman wrote in February 2012 that: "What we're basically looking at, then, is a balance of payments problem, in which capital flooded south after the creation of the euro, leading to overvaluation in southern Europe." Another narrative about the origin has been focused on the respective parts played by public monetary policy (notably in the US) and by the practices of private financial institutions. In the U.S., mortgage funding was unusually decentralised, opaque, and competitive, and it is believed that competition between lenders for revenue and market share contributed to declining underwriting standards and risky lending. While Alan Greenspan's role as Chairman of the Federal Reserve has been widely discussed, the main point of controversy remains the lowering of the Federal funds rate to 1% for more than a year, which, according to Austrian theorists, injected huge amounts of "easy" credit-based money into the financial system and created an unsustainable economic boom. There is an argument that Greenspan's actions in the years 2002–2004 were actually motivated by the need to take the U.S. economy out of the early 2000s recession caused by the bursting of the dot-com bubble: although by doing so he did not avert the crisis, but only postponed it. Another narrative focuses on high levels of private debt in the US economy. USA household debt as a percentage of annual disposable personal income was 127% at the end of 2007, versus 77% in 1990. Faced with increasing mortgage payments as their adjustable rate mortgage payments increased, households began to default in record numbers, rendering mortgage-backed securities worthless. High private debt levels also impact growth by making recessions deeper and the following recovery weaker. Robert Reich claims the amount of debt in the US economy can be traced to economic inequality, assuming that middle-class wages remained stagnant while wealth concentrated at the top, and households "pull equity from their homes and overload on debt to maintain living standards". The IMF reported in April 2012: "Household debt soared in the years leading up to the downturn. In advanced economies, during the five years preceding 2007, the ratio of household debt to income rose by an average of 39 percentage points, to 138 percent. In Denmark, Iceland, Ireland, the Netherlands, and Norway, debt peaked at more than 200 percent of household income. 
A surge in household debt to historic highs also occurred in emerging economies such as Estonia, Hungary, Latvia, and Lithuania. The concurrent boom in both house prices and the stock market meant that household debt relative to assets held broadly stable, which masked households' growing exposure to a sharp fall in asset prices. When house prices declined, leading to the 2008 financial crisis, many households saw their wealth shrink relative to their debt, and, with less income and more unemployment, found it harder to meet mortgage payments. By the end of 2011, real house prices had fallen from their peak by about 41% in Ireland, 29% in Iceland, 23% in Spain and the United States, and 21% in Denmark. Household defaults, underwater mortgages (where the loan balance exceeds the house value), foreclosures, and fire sales are now endemic to a number of economies. Household deleveraging by paying off debts or defaulting on them has begun in some countries. It has been most pronounced in the United States, where about two-thirds of the debt reduction reflects defaults." The onset of the economic crisis took most people by surprise. A 2009 paper identifies twelve economists and commentators who, between 2000 and 2006, predicted a recession based on the collapse of the then-booming housing market in the United States: Dean Baker, Wynne Godley, Fred Harrison, Michael Hudson, Eric Janszen, Med Jones, Steve Keen, Jakob Brøchner Madsen, Jens Kjaer Sørensen, Kurt Richebächer, Nouriel Roubini, Peter Schiff, and Robert Shiller. By 2007, real estate bubbles were still under way in many parts of the world, especially in the United States, France, the United Kingdom, Spain, the Netherlands, Australia, the United Arab Emirates, New Zealand, Ireland, Poland, South Africa, Greece, Bulgaria, Croatia, Norway, Singapore, South Korea, Sweden, Finland, Argentina, the Baltic states, India, Romania, Ukraine and China. U.S. Federal Reserve Chairman Alan Greenspan said in mid-2005 that "at a minimum, there's a little 'froth' [in the U.S. housing market]...it's hard not to see that there are a lot of local bubbles". The Economist, writing at the same time, went further, saying, "[T]he worldwide rise in house prices is the biggest bubble in history". Real estate bubbles are (by definition of the word "bubble") followed by a price decrease (also known as a housing price crash) that can result in many owners holding negative equity (a mortgage debt higher than the current value of the property). Several sources have noted the failure of the US government to supervise or even require transparency of the financial instruments known as derivatives. Derivatives such as credit default swaps (CDSs) were unregulated or barely regulated. Michael Lewis noted CDSs enabled speculators to stack bets on the same mortgage securities. This is analogous to allowing many persons to buy insurance on the same house. Speculators that bought CDS protection were betting significant mortgage security defaults would occur, while the sellers (such as AIG) bet they would not. An unlimited amount could be wagered on the same housing-related securities, provided buyers and sellers of the CDS could be found. When massive defaults occurred on underlying mortgage securities, companies like AIG that were selling CDS were unable to perform their side of the obligation and defaulted; U.S. taxpayers paid over $100 billion to global financial institutions to honor AIG obligations, generating considerable outrage. 
A 2008 investigative article in The Washington Post found leading government officials at the time (Federal Reserve Board Chairman Alan Greenspan, Treasury Secretary Robert Rubin, and SEC Chairman Arthur Levitt) vehemently opposed any regulation of derivatives. In 1998, Brooksley E. Born, head of the Commodity Futures Trading Commission, put forth a policy paper asking for feedback from regulators, lobbyists, and legislators on the question of whether derivatives should be reported, sold through a central facility, or whether capital requirements should be imposed on their buyers. Greenspan, Rubin, and Levitt pressured her to withdraw the paper and Greenspan persuaded Congress to pass a resolution preventing CFTC from regulating derivatives for another six months – when Born's term of office would expire. Ultimately, it was the collapse of a specific kind of derivative, the mortgage-backed security, that triggered the economic crisis of 2008. Paul Krugman wrote in 2009 that the run on the shadow banking system was the fundamental cause of the crisis. "As the shadow banking system expanded to rival or even surpass conventional banking in importance, politicians and government officials should have realised that they were re-creating the kind of financial vulnerability that made the Great Depression possible – and they should have responded by extending regulations and the financial safety net to cover these new institutions. Influential figures should have proclaimed a simple rule: anything that does what a bank does, anything that has to be rescued in crises the way banks are, should be regulated like a bank." He referred to this lack of controls as "malign neglect". During 2008, three of the largest U.S. investment banks either went bankrupt (Lehman Brothers) or were sold at fire sale prices to other banks (Bear Stearns and Merrill Lynch). The investment banks were not subject to the more stringent regulations applied to depository banks. These failures exacerbated the instability in the global financial system. The remaining two investment banks, Morgan Stanley and Goldman Sachs, potentially facing failure, opted to become commercial banks, thereby subjecting themselves to more stringent regulation but receiving access to credit via the Federal Reserve. Further, American International Group (AIG) had insured mortgage-backed and other securities but was not required to maintain sufficient reserves to pay its obligations when debtors defaulted on these securities. AIG was contractually required to post additional collateral with many creditors and counter-parties, touching off controversy when over $100 billion of U.S. taxpayer money was paid out to major global financial institutions on behalf of AIG. While this money was legally owed to the banks by AIG (under agreements made via credit default swaps purchased from AIG by the institutions), a number of Congressmen and media members expressed outrage that taxpayer money was used to bail out banks. Economist Gary Gorton wrote in May 2009: Unlike the historical banking panics of the 19th and early 20th centuries, the current banking panic is a wholesale panic, not a retail panic. In the earlier episodes, depositors ran to their banks and demanded cash in exchange for their checking accounts. Unable to meet those demands, the banking system became insolvent. 
The current panic involved financial firms "running" on other financial firms by not renewing sale and repurchase agreements (repo) or increasing the repo margin ("haircut"), forcing massive deleveraging, and resulting in the banking system being insolvent. The Financial Crisis Inquiry Commission reported in January 2011: In the early part of the 20th century, we erected a series of protections – the Federal Reserve as a lender of last resort, federal deposit insurance, ample regulations – to provide a bulwark against the panics that had regularly plagued America's banking system in the 19th century. Yet, over the past 30-plus years, we permitted the growth of a shadow banking system – opaque and laden with short term debt – that rivaled the size of the traditional banking system. Key components of the market – for example, the multitrillion-dollar repo lending market, off-balance-sheet entities, and the use of over-the-counter derivatives – were hidden from view, without the protections we had constructed to prevent financial meltdowns. We had a 21st-century financial system with 19th-century safeguards. The Gramm–Leach–Bliley Act (1999), which reduced the regulation of banks by allowing commercial and investment banks to merge, has also been blamed for the crisis, by Nobel Prize–winning economist Joseph Stiglitz among others. Peter Wallison and Edward Pinto of the American Enterprise Institute, which advocates for private enterprise and limited government, have asserted that private lenders were encouraged to relax lending standards by government affordable housing policies. They cite The Housing and Community Development Act of 1992, which initially required that 30 percent or more of Fannie's and Freddie's loan purchases be related to affordable housing. The legislation gave HUD the power to set future requirements. These rose to 42 percent in 1995 and 50 percent in 2000, and by 2008 a 56 percent minimum was established. However, the Financial Crisis Inquiry Commission (FCIC) Democratic majority report concluded that Fannie & Freddie "were not a primary cause" of the crisis and that CRA was not a factor in the crisis. Further, since housing bubbles appeared in multiple countries in Europe as well, the FCIC Republican minority dissenting report also concluded that U.S. housing policies were not a robust explanation for a wider global housing bubble. The hypothesis that a primary cause of the crisis was U.S. government housing policy requiring banks to make risky loans has been widely disputed, with Paul Krugman referring to it as "imaginary history". One of the other challenges with blaming government regulations for essentially forcing banks to make risky loans is the timing. Subprime lending increased from around 10% of mortgage origination historically to about 20% only from 2004 to 2006, with housing prices peaking in 2006. Blaming affordable housing regulations established in the 1990s for a sudden spike in subprime origination is problematic at best. A more proximate government action to the sudden rise in subprime lending was the SEC relaxing lending standards for the top investment banks during an April 2004 meeting with bank leaders. These banks increased their risk-taking shortly thereafter, significantly increasing their purchases and securitization of lower-quality mortgages, thus encouraging additional subprime and Alt-A lending by mortgage companies. This action by its investment bank competitors also resulted in Fannie Mae and Freddie Mac taking on more risk. 
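The repo "haircut" mechanism Gorton describes above can be illustrated with a stylized calculation. The numbers below are hypothetical, not from the article: when the haircut on collateral rises, the same securities support far less borrowing, forcing the borrower to sell assets or raise cash.

```python
# Stylized illustration (hypothetical numbers): how a rising repo "haircut"
# forces deleveraging. A firm funds a bond portfolio in the repo market;
# borrowing capacity = collateral value * (1 - haircut).
collateral = 100_000_000        # market value of securities posted as collateral ($)

def repo_funding(value, haircut):
    return value * (1 - haircut)

before = repo_funding(collateral, 0.02)   # calm markets: 2% haircut
after = repo_funding(collateral, 0.20)    # panic: haircut jumps to 20%

print(f"funding before: ${before:,.0f}")  # $98,000,000
print(f"funding after:  ${after:,.0f}")   # $80,000,000
print(f"cash to raise or assets to sell: ${before - after:,.0f}")
# $18,000,000 per $100 million of collateral, before any fall in collateral value
```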
The 2008 financial crisis and the Great Recession were described as a symptom of another, deeper crisis by a number of economists. For example, Ravi Batra argues that growing inequality of financial capitalism produces speculative bubbles that burst and result in depression and major political changes. Feminist economists Ailsa McKay and Margunn Bjørnholt argue that the 2008 financial crisis and the response to it revealed a crisis of ideas in mainstream economics and within the economics profession, and call for a reshaping of the economy, economic theory, and the economics profession. They argue that such a reshaping should include new advances within feminist economics and ecological economics that take as their starting point the socially responsible, sensible and accountable subject in creating an economy and economic theories that fully acknowledge care for each other as well as the planet. Effects Though it was not generally recognized at the time, the Great Recession had a significant economic and political impact on the United States. While the recession technically lasted from December 2007 to June 2009 (the nominal GDP trough), many important economic variables did not regain pre-recession (November or Q4 2007) levels until 2011–2016. For example, real GDP fell $650 billion (4.3%) and did not recover its $15 trillion pre-recession level until Q3 2011. Household net worth, which reflects the value of both stock markets and housing prices, fell $11.5 trillion (17.3%) and did not regain its pre-recession level of $66.4 trillion until Q3 2012. The number of persons with jobs (total non-farm payrolls) fell 8.6 million (6.2%) and did not regain the pre-recession level of 138.3 million until May 2014. The unemployment rate peaked at 10.0% in October 2009 and did not return to its pre-recession level of 4.7% until May 2016. A key dynamic slowing the recovery was that both individuals and businesses paid down debts for several years, as opposed to borrowing and spending or investing as had historically been the case. This shift to a private sector surplus drove a sizable government deficit. However, the federal government held spending at about $3.5 trillion from fiscal years 2009–2014 (thereby decreasing it as a percent of GDP), a form of austerity. Then-Fed Chair Ben Bernanke explained during November 2012 several of the economic headwinds that slowed the recovery. On the political front, widespread anger at banking bailouts and stimulus measures (begun by President George W. Bush and continued or expanded by President Obama) with few consequences for banking leadership was a factor in driving the country politically rightward starting in 2010. The Troubled Asset Relief Program (TARP) was the largest of the bailouts. In 2008, TARP allocated $426.4 billion to various major financial institutions. However, the US collected $441.7 billion in return from these loans in 2010, recording a profit of $15.3 billion. Nonetheless, there was a political shift from the Democratic party. Examples include the rise of the Tea Party and the loss of Democratic majorities in subsequent elections. President Obama declared the bailout measures started under the Bush administration and continued during his administration as completed and mostly profitable as of December 2014[update]. As of January 2018[update], bailout funds had been fully recovered by the government, when interest on loans is taken into consideration. 
A total of $626B was invested, loaned, or granted due to various bailout measures, while $390B had been returned to the Treasury. The Treasury had earned another $323B in interest on bailout loans, resulting in an $87B profit. Economic and political commentators have argued the Great Recession was also an important factor in the rise of populist sentiment that resulted in the election of right-wing populist President Trump in 2016, and left-wing populist Bernie Sanders' candidacy for the Democratic nomination. The crisis in Europe generally progressed from banking system crises to sovereign debt crises, as many countries elected to bail out their banking systems using taxpayer money.[citation needed] Greece was different in that it faced large public debts rather than problems within its banking system. Several countries received bailout packages from the troika (European Commission, European Central Bank, International Monetary Fund), which also implemented a series of emergency measures. Many European countries embarked on austerity programs, reducing their budget deficits relative to GDP from 2010 to 2011. For example, according to the CIA World Factbook Greece improved its budget deficit from 10.4% GDP in 2010 to 9.6% in 2011. Iceland, Italy, Ireland, Portugal, France, and Spain also improved their budget deficits from 2010 to 2011 relative to GDP. However, with the exception of Germany, each of these countries had public-debt-to-GDP ratios that increased (i.e., worsened) from 2010 to 2011, as indicated in the chart at right. Greece's public-debt-to-GDP ratio increased from 143% in 2010 to 165% in 2011 to 185% in 2014. This indicates that despite improving budget deficits, GDP growth was not sufficient to support a decline (improvement) in the debt-to-GDP ratio for these countries during this period. Eurostat reported that the debt to GDP ratio for the 17 Euro area countries together was 70.1% in 2008, 79.9% in 2009, 85.3% in 2010, and 87.2% in 2011. According to the CIA World Factbook, from 2010 to 2011, the unemployment rates in Spain, Greece, Italy, Ireland, Portugal, and the UK increased. France had no significant changes, while in Germany and Iceland the unemployment rate declined. Eurostat reported that eurozone unemployment reached record levels in September 2012 at 11.6%, up from 10.3% the prior year. Unemployment varied significantly by country. Economist Martin Wolf analysed the relationship between cumulative GDP growth from 2008 to 2012 and total reduction in budget deficits due to austerity policies (see chart) in several European countries during April 2012. He concluded that: "In all, there is no evidence here that large fiscal contractions [budget deficit reductions] bring benefits to confidence and growth that offset the direct effects of the contractions. They bring exactly what one would expect: small contractions bring recessions and big contractions bring depressions." Changes in budget balances (deficits or surpluses) explained approximately 53% of the change in GDP, according to the equation derived from the IMF data used in his analysis. Economist Paul Krugman analysed the relationship between GDP and reduction in budget deficits for several European countries in April 2012 and concluded that austerity was slowing growth, similar to Martin Wolf. He also wrote: "... this also implies that 1 euro of austerity yields only about 0.4 euros of reduced deficit, even in the short run. No wonder, then, that the whole austerity enterprise is spiraling into disaster." 
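Wolf's and Krugman's analyses amount to a simple cross-country regression of GDP growth on fiscal tightening. The sketch below shows the mechanics of such a fit; the data points are hypothetical placeholders, not the IMF figures they actually used.

```python
# Sketch of the kind of cross-country fit Wolf and Krugman describe
# (hypothetical data points, not the IMF data underlying their analyses).
# x: fiscal tightening 2008-2012 (reduction in budget deficit, % of GDP)
# y: cumulative real GDP growth 2008-2012 (%)
tightening = [0.5, 1.0, 2.0, 3.5, 5.0, 9.0]
growth     = [2.0, 1.0, -0.5, -2.0, -4.0, -15.0]

n = len(tightening)
mean_x = sum(tightening) / n
mean_y = sum(growth) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(tightening, growth)) / n
var_x  = sum((x - mean_x) ** 2 for x in tightening) / n

slope = cov_xy / var_x                     # GDP growth lost per point of tightening
intercept = mean_y - slope * mean_x
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(tightening, growth))
ss_tot = sum((y - mean_y) ** 2 for y in growth)
r_squared = 1 - ss_res / ss_tot            # share of GDP variation "explained"

print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")
```

A statement like "changes in budget balances explained approximately 53% of the change in GDP" corresponds to an R-squared of about 0.53 in a fit of this form.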
Britain's decision to leave the European Union in 2016 has been partly attributed to the after-effects of the Great Recession on the country. During the Great Recession and in the immediate aftermath, Bangladesh, Ukraine, Honduras, Guatemala, Palestine, and Hong Kong went from democracies to a mix of democracy and authoritarianism and Madagascar, the Gambia, Ethiopia, Russia, and Fiji went from mixed regimes to authoritarian ones. While each country had democratic backsliding for different reasons, economic calamity has long been known to contribute to instability that can cause authoritarian forces to take hold. Poland was the only member of the European Union to avoid a GDP recession during the Great Recession. As of December 2009, the Polish economy had not entered recession nor even contracted, while its IMF 2010 GDP growth forecast of 1.9 percent was expected to be upgraded. Analysts identified several causes for the positive economic development in Poland: Extremely low levels of bank lending and a relatively small mortgage market; the relatively recent dismantling of EU trade barriers and the resulting surge in demand for Polish goods since 2004; Poland being the recipient of direct EU funding since 2004; lack of over-dependence on a single export sector; a tradition of government fiscal responsibility; a relatively large internal market; the free-floating Polish zloty; low labour costs attracting continued foreign direct investment; economic difficulties at the start of the decade, which prompted austerity measures in advance of the world crisis.[citation needed] While India, Uzbekistan, China, and Iran experienced slowing growth, they did not enter recessions. South Korea narrowly avoided technical recession in the first quarter of 2009. The International Energy Agency stated in mid September that South Korea could be the only large[clarify] OECD country to avoid recession for the whole of 2009. Australia avoided a technical recession after experiencing only one quarter of negative growth in the fourth quarter of 2008, with GDP returning to positive in the first quarter of 2009. The 2008 financial crisis did not affect developing countries to a great extent. Experts see several reasons: Africa was not affected because it is not fully integrated in the world market. Latin America and Asia seemed better prepared, since they have experienced crises before. In Latin America, for example, banking laws and regulations are very stringent. Bruno Wenn of the German DEG suggests that Western countries could learn from these countries when it comes to regulations of financial markets. Timeline of effects The few recessions appearing early in 2006–07 are commonly never associated to be part of the Great Recession, which is illustrated by the fact that only two countries (Iceland and Jamaica) were in recession in Q4 2007. One year before the maximum, in Q1 2008, only six countries were in recession (Iceland, Sweden, Finland, Ireland, Portugal and New Zealand). The number of countries in recession was 25 in Q2 2008, 39 in Q3 2008 and 53 in Q4 2008. At the steepest part of the Great Recession in Q1 2009, a total of 59 out of 71 countries were simultaneously in recession. The number of countries in recession was 37 in Q2 2009, 13 in Q3 2009 and 11 in Q4 2009. One year after the maximum, in Q1 2010, only seven countries were in recession (Greece, Croatia, Romania, Iceland, Jamaica, Venezuela and Belize). 
The recession data for the overall G20 zone (representing 85% of all GWP) indicate that the Great Recession existed as a global recession from Q3 2008 until Q1 2009. Subsequent follow-up recessions in 2010–2013 were confined to Belize, El Salvador, Paraguay, Jamaica, Japan, Taiwan, New Zealand and 24 out of 50 European countries (including Greece). As of October 2014, only five out of the 71 countries with available quarterly data (Cyprus, Italy, Croatia, Belize and El Salvador) were still in ongoing recessions. The many follow-up recessions hitting the European countries are commonly referred to as direct repercussions of the European debt crisis. Iceland fell into an economic depression in 2008 following the collapse of its banking system (see 2008–2011 Icelandic financial crisis). By mid-2012, Iceland was regarded as one of Europe's recovery success stories, largely as a result of a currency devaluation that effectively reduced wages by 50%, making exports more competitive. The following countries had a recession starting in the fourth quarter of 2007: United States. The following countries had a recession starting in the first quarter of 2008: Latvia, Ireland, New Zealand, and Sweden. The following countries/territories had a recession starting in the second quarter of 2008: Japan, Hong Kong, Singapore, Italy, Turkey, Germany, United Kingdom, the eurozone, the European Union, and the OECD. The following countries/territories had a recession starting in the third quarter of 2008: Spain and Taiwan. The following countries/territories had a recession starting in the fourth quarter of 2008: Switzerland. South Korea avoided recession with GDP returning positive at a 0.1% expansion in the first quarter of 2009. Of the seven largest economies in the world by GDP, only China avoided a recession in 2008. In the year to the third quarter of 2008, China grew by 9%. Until recently, Chinese officials considered 8% GDP growth to be required simply to create enough jobs for rural people moving to urban centres. This figure may more accurately be considered to be 5–7% now[when?] that the main growth in working population is receding.[citation needed] Ukraine went into technical depression in January 2009 with a GDP decline of 20% when compared on a monthly basis with the GDP level in January 2008. Overall, Ukrainian real GDP fell 14.8% when comparing the whole of 2009 with 2008. When measured quarter-on-quarter by changes of seasonally adjusted real GDP, Ukraine was more precisely in recession/depression throughout the four quarters from Q2-2008 until Q1-2009 (with respective qoq-changes of: -0.1%, -0.5%, -9.3%, -10.3%), and the two quarters from Q3-2012 until Q4-2012 (with respective qoq-changes of: -1.5% and -0.8%). Japan was in recovery in the mid-2000s but slipped back into recession and deflation in 2008. The recession in Japan intensified in the fourth quarter of 2008 with a GDP growth of −12.7%, and deepened further in the first quarter of 2009 with a GDP growth of −15.2%. Political instability related to the economic crisis On February 26, 2009, an Economic Intelligence Briefing was added to the daily intelligence briefings prepared for the President of the United States. This addition reflected the assessment of U.S. intelligence agencies that the 2008 financial crisis presented a serious threat to international stability. 
Business Week stated in March 2009 that global political instability was rising fast because of the 2008 financial crisis and was creating new challenges that needed managing. The Associated Press reported in March 2009 that United States "Director of National Intelligence Dennis Blair has said the economic weakness could lead to political instability in many developing nations." Even some developed countries were seeing political instability. NPR reported that David Gordon, a former intelligence officer who now leads research at the Eurasia Group, said: "Many, if not most, of the big countries out there have room to accommodate economic downturns without having large-scale political instability if we're in a recession of normal length. If you're in a much longer-run downturn, then all bets are off." Political scientists have argued that the economic stasis triggered social churning that was expressed through protests on a variety of issues across the developing world. In Brazil, disaffected youth rallied against a minor bus-fare hike, and in Israel they protested against high rents in Tel Aviv. In all these cases, the ostensible immediate cause of the protest was amplified by the underlying social suffering induced by the Great Recession. In January 2009, the government leaders of Iceland were forced to call elections two years early after the people of Iceland staged mass protests and clashed with the police because of the government's handling of the economy. Hundreds of thousands protested in France against President Sarkozy's economic policies. Prompted by the 2008 Latvian financial crisis, the opposition and trade unions there organised a rally against the cabinet of premier Ivars Godmanis. The rally gathered some 10–20 thousand people. In the evening, the rally turned into a riot. The crowd moved to the building of the parliament and attempted to force their way into it, but were repelled by the state's police. In late February, many Greeks took part in a massive general strike because of the economic situation and they shut down schools, airports, and many other services in Greece. Police and protesters clashed in Lithuania where people protesting the economic conditions were shot with rubber bullets. Communists and others rallied in Moscow to protest the Russian government's economic plans. However, the impact was mild in Russia, whose economy gained from high oil prices. Asian countries saw various degrees of protest. Protests also occurred in China as demand from the West for exports dropped dramatically and unemployment increased. Beyond these initial protests, the protest movement grew and continued in 2011. In late 2011, the Occupy Wall Street protest took place in the United States, spawning several offshoots that came to be known as the Occupy movement. In 2012, the economic difficulties in Spain increased support for secession movements. In Catalonia, support for the secession movement expanded. On September 11 of that year, a pro-independence march drew a crowd that police estimated at 1.5 million. Policy responses The 2008 financial crisis led to emergency interventions in many national financial systems. As the crisis developed into genuine recession in many major economies, economic stimulus meant to revive economic growth became the most common policy tool. After having implemented rescue plans for the banking system, major developed and emerging countries announced plans to relieve their economies. 
In particular, economic stimulus plans were announced in China, the United States, and the European Union. In the final quarter of 2008, the G-20 group of major economies assumed a new significance as a focus of economic and financial crisis management. The crisis accelerated the financialization of states around the world, as governments increased the use of market instruments to achieve public goals through approaches like bond issuance, securitization of state assets, and the creation of sovereign funds. The U.S. government passed the Emergency Economic Stabilization Act of 2008 (EESA or TARP) during October 2008. This law included $700 billion in funding for the "Troubled Assets Relief Program" (TARP). Following a model initiated by the United Kingdom bank rescue package, $205 billion was used in the Capital Purchase Program to lend funds to banks in exchange for dividend-paying preferred stock. On February 17, 2009, U.S. President Barack Obama signed the American Recovery and Reinvestment Act of 2009, a $787 billion stimulus package with a broad spectrum of spending and tax cuts. Over $75 billion of the package was specifically allocated to programs to help struggling homeowners. This program was referred to as the Homeowner Affordability and Stability Plan. The U.S. Federal Reserve (central bank) lowered interest rates and significantly expanded the money supply to help address the crisis. The New York Times reported in February 2013 that the Fed continued to support the economy with various monetary stimulus measures: "The Fed, which has amassed almost $3 trillion in Treasury and mortgage-backed securities to promote more borrowing and lending, is expanding those holdings by $85 billion a month until it sees clear improvement in the labor market. It plans to hold short-term interest rates near zero even longer, at least until the unemployment rate falls below 6.5 percent." The U.S. Federal Reserve established some swap agreements to help address banks' liquidity crisis, although this emergency liquidity only benefitted a dozen countries and excluded most developing economies. On September 15, 2008, China cut its interest rate for the first time since 2002. Indonesia reduced its overnight rate, at which commercial banks can borrow overnight funds from the central bank, by two percentage points to 10.25 percent. The Reserve Bank of Australia injected nearly $1.5 billion into the banking system, nearly three times as much as the market's estimated requirement. The Reserve Bank of India added almost $1.32 billion, through a refinance operation, its biggest in at least a month. On November 9, 2008, the Chinese economic stimulus program, a RMB¥ 4 trillion ($586 billion) stimulus package, was announced by the central government of the People's Republic of China in its biggest move to stop the 2008 financial crisis from affecting the world's second largest economy. A statement on the government's website said the State Council had approved a plan to invest 4 trillion yuan ($586 billion) in infrastructure and social welfare by the end of 2010. The stimulus package was invested in key areas such as housing, rural infrastructure, transportation, health and education, environment, industry, disaster rebuilding, income-building, tax cuts, and finance. China's massive stimulus was also an important contributor to overall global recovery. In addition to helping stabilize the global economy, China's stimulus also provided an opportunity for China to retool its domestic infrastructure. 
Later that month, China's export-driven economy was starting to feel the impact of the economic slowdown in the United States and Europe despite the government already cutting key interest rates three times in less than two months in a bid to spur economic expansion. On November 28, 2008, the Ministry of Finance of the People's Republic of China and the State Administration of Taxation jointly announced a rise in export tax rebate rates on some labour-intensive goods. These additional tax rebates took place on December 1, 2008. During the crisis, the People's Bank of China helped address banks' liquidity problems by signing swap agreements with numerous other countries to provide them with liquidity based on the renminbi. In Taiwan, the central bank on September 16, 2008, said it would cut its required reserve ratios for the first time in eight years. The central bank added $3.59 billion into the foreign-currency interbank market the same day. Bank of Japan pumped $29.3 billion into the financial system on September 17, 2008, and the Reserve Bank of Australia added $3.45 billion the same day. In developing and emerging economies, responses to the global crisis mainly consisted of low-rate monetary policy (Asia and the Middle East mainly) coupled with the depreciation of the currency against the dollar. There were also stimulus plans in some Asian countries, in the Middle East and in Argentina. In Asia, plans generally amounted to 1 to 3% of GDP, with the notable exception of China, which announced a plan accounting for 16% of GDP (6% of GDP per year). Until September 2008, European policy measures were limited to a small number of countries (Spain and Italy). In both countries, the measures were dedicated to households (tax rebates) and reform of the taxation system to support specific sectors such as housing. The European Commission proposed a €200 billion stimulus plan to be implemented at the European level by the countries. At the beginning of 2009, the UK and Spain completed their initial plans, while Germany announced a new plan. On September 29, 2008, the Belgian, Luxembourg and Dutch authorities partially nationalised Fortis. The German government bailed out Hypo Real Estate. On October 8, 2008, the British Government announced a bank rescue package of around £500 billion ($850 billion at the time). The plan comprised three parts. The first £200 billion would be made available to the banks as short-term liquidity support. The second part would consist of the government increasing the banks' capital. Along with this, £50 billion would be made available if the banks needed it; finally, the government would underwrite any eligible lending between the British banks, with a limit of £250 billion.[citation needed] In early December 2008, German Finance Minister Peer Steinbrück indicated a lack of belief in a "Great Rescue Plan" and reluctance to spend more money addressing the crisis. In March 2009, the European Union Presidency confirmed that the EU was at the time strongly resisting US pressure to increase European budget deficits. From 2010, the United Kingdom began a fiscal consolidation program to reduce debt and deficit levels while at the same time stimulating economic recovery. Other European countries also began fiscal consolidation with similar aims. Most political responses to the 2008 financial crisis were taken, as seen above, by individual nations. 
Some coordination took place at the European level, but the need to cooperate at the global level led leaders to activate the G-20 major economies entity. A first summit dedicated to the crisis took place at the heads-of-state level in November 2008 (2008 G-20 Washington summit). The G-20 countries met in a summit held in November 2008 in Washington to address the economic crisis. Apart from proposals on international financial regulation, they pledged to take measures to support their economies and to coordinate those measures, and refused any resort to protectionism. Another G-20 summit was held in London in April 2009. Finance ministers and central bank leaders of the G-20 met in Horsham, England, in March 2009 to prepare for the summit, and pledged to restore global growth as soon as possible. They decided to coordinate their actions and to stimulate demand and employment. They also pledged to fight against all forms of protectionism and to maintain trade and foreign investments. These actions were projected to cost $1.1 trillion. They also committed to maintain the supply of credit by providing more liquidity and recapitalising the banking system, and to rapidly implement the stimulus plans. As for central bankers, they pledged to maintain low-rate policies as long as necessary. Finally, the leaders decided to help emerging and developing countries, through a strengthening of the IMF. Policy recommendations The IMF stated in September 2010 that the 2008 financial crisis would not end without a major decrease in unemployment as hundreds of millions of people were unemployed worldwide. The IMF urged governments to expand social safety nets and to generate job creation even as they were under pressure to cut spending. The IMF also encouraged governments to invest in skills training for the unemployed, and encouraged even governments of countries with major debt risk, such as Greece, to focus first on long-term economic recovery by creating jobs. The Bank of Israel was the first to raise interest rates after the global recession began. It increased rates in August 2009. On October 6, 2009, Australia became the first G20 country to raise its main interest rate, with the Reserve Bank of Australia moving rates up from 3.00% to 3.25%. The Norges Bank of Norway and the Reserve Bank of India raised interest rates in March 2010. On November 2, 2017, the Bank of England raised interest rates for the first time since March 2009 from 0.25% to 0.5% in an attempt to curb inflation. Comparisons with the Great Depression On April 17, 2009, the then head of the IMF Dominique Strauss-Kahn said that there was a chance that certain countries might not implement the proper policies to avoid feedback mechanisms that could eventually turn the recession into a depression. "The free-fall in the global economy may be starting to abate, with a recovery emerging in 2010, but this depends crucially on the right policies being adopted today." The IMF pointed out that unlike the Great Depression, this recession was synchronized across countries as a result of the global integration of markets. Such synchronized recessions were said to last longer than typical economic downturns and to have slower recoveries. Olivier Blanchard, IMF Chief Economist, stated that the percentage of workers laid off for long stints had been rising with each downturn for decades but the figures had surged this time. "Long-term unemployment is alarmingly high: in the United States, half the unemployed have been out of work for over six months, something we have not seen since the Great Depression." 
The IMF also stated that a link between rising inequality within Western economies and deflating demand may exist. The last time that the wealth gap reached such skewed extremes was in 1928–1929.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_ref-35] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
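OpenAI Gym exposed reinforcement-learning environments through a small common interface. The sketch below is a minimal usage example, not taken from the article; it follows the original Gym API (reset returning an observation, step returning a 4-tuple), which later versions of the library changed.

```python
# Minimal sketch of the original OpenAI Gym interface (the API changed in
# later releases of the library): an agent acting at random in CartPole.
import gym

env = gym.make("CartPole-v0")        # a classic control environment shipped with Gym
observation = env.reset()            # start an episode, get the initial observation

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                 # random policy, for illustration
    observation, reward, done, info = env.step(action) # advance the environment one step
    total_reward += reward

print(f"episode finished with total reward {total_reward}")
env.close()
```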
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk later revived legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use the proceeds to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman described as the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 valuation. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was needed to pay for the use of Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot products as Microsoft Copilot, added Copilot to many Windows installations, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud-computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military applications of AI, alongside awards to Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
This was an increase from $3.7 billion in 2024. Growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), and by a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to reach $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability: OpenAI targets cash-flow-positive operations by 2029 and projects revenue of approximately $200 billion by 2030. The spending trajectory reflects both the heavy capital requirements of scaling state-of-the-art AI systems and OpenAI's intent to remain a leading developer in the industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion and made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO after the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed on the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman spoke in favor of returning to OpenAI, he later stated that he had considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board had initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstituted board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting observer of the company's operations; Microsoft relinquished the observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product-testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO, Vijaye Raji, as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In the same month, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, the text passages to be annotated often contained detailed descriptions of various types of violence, including sexual violence. An investigation by Time found that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour after tax. Sama's spokesperson said that the $12.50 also covered other implicit costs, such as infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the following five years. Also in September 2025, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, this has not happened, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion-dollar deal with AMD under which OpenAI committed to purchasing six gigawatts' worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share-price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and are included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, OpenAI announced GPT-2, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora video-generation model and OpenAI o1, an early reasoning model internally codenamed "Strawberry". It also introduced ChatGPT Pro, a $200-per-month subscription offering unlimited o1 access and enhanced voice features, and shared preliminary benchmark results for the upcoming OpenAI o3 models. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
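As an illustrative aside that is not part of the original article: the commercial API mentioned above is today exposed through official client libraries, and a minimal sketch of a chat request using the official openai Python package (v1.x) might look like the following. The model name and prompt are placeholder assumptions, and an API key is assumed to be available in the OPENAI_API_KEY environment variable.

# Minimal sketch, not OpenAI's own documentation: a basic chat completion request
# with the official "openai" Python client. Assumes the OPENAI_API_KEY environment
# variable is set; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what an API is in one sentence."},
    ],
)

print(response.choices[0].message.content)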
In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind had solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to help scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, and includes features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was becoming riskier, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that the episode was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, co-leader of the superalignment team, also departed amid concerns over safety and trust. OpenAI then signed content licensing deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including through reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The investigation concerned allegations that the company had scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. The shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced its security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he had forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity, but leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees about potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and Alternate Media Inc. filed lawsuits against OpenAI on copyright grounds. The litigation is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications, including The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs that he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT had given a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI also claimed that neither the recipients of ChatGPT's output nor the sources of the data used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her killing; Adams had allegedly been murdered by her son, Stein-Erik Soelberg, 56, who in the months before the killing had often discussed his paranoid and delusional ideas with ChatGPT. The estate claimed that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who are disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Praying_mantis] | [TOKENS: 5778]
Contents Mantis Mantises are an order (Mantodea) of insects that contains over 2,400 species in about 460 genera in 33 families. The largest family is the Mantidae ("mantids"). Mantises are distributed worldwide in temperate and tropical habitats. They have triangular heads with bulging eyes supported on flexible necks. Their elongated bodies may or may not have wings, but all mantodeans have forelegs that are greatly enlarged and adapted for catching and gripping prey; their upright posture, with forearms folded while remaining stationary, resembles a praying position and has led to the common name praying mantis. The closest relatives of mantises are termites and cockroaches (Blattodea), which are all within the superorder Dictyoptera. Mantises are sometimes confused with stick insects (Phasmatodea), other elongated insects such as grasshoppers (Orthoptera), or other more distantly related insects with raptorial forelegs such as mantisflies (Mantispidae). Mantises are mostly ambush predators, but a few ground-dwelling species actively pursue their prey. They normally live for about a year. In cooler climates, the adults lay eggs in autumn, then die. The eggs are protected by their hard capsules and hatch in the spring. Females sometimes practice sexual cannibalism, eating their mates after copulation. Mantises were considered to have supernatural powers by early civilizations, including ancient Greece, ancient Egypt, and Assyria. A cultural trope popular in cartoons imagines the female mantis as a femme fatale. Mantises are among the insects most commonly kept as pets. Etymology The name Mantodea is formed from the Ancient Greek words μάντις (mántis) meaning "prophet", and εἶδος (eîdos) meaning "form" or "type". It was coined in 1838 by the German entomologist Hermann Burmeister. The name "mantid" properly refers only to members of the family Mantidae, which was, historically, the only family in the order. The other common name, praying mantis, applied to any species in the order (though in Europe mainly to Mantis religiosa), comes from the typical prayer-like posture these mantises adopt when their forelegs are folded. The vernacular plural "mantises" (used in this article) was originally confined largely to the US, with "mantids" predominantly used as the plural in the UK and elsewhere, until the family Mantidae was further split in 2002; at present, only some 75 out of 430 known genera are mantids, while the rest belong to 28 other families and are therefore no longer "mantids". Taxonomy and evolution Over 2,400 species of mantises in about 430 genera are recognized. They are predominantly found in tropical regions, but some live in temperate areas. The systematics of mantises have long been disputed. Mantises, along with stick insects (Phasmatodea), were once placed in the order Orthoptera with the cockroaches (now Blattodea) and ice crawlers (now Grylloblattodea). Kristensen (1991) combined the Mantodea with the cockroaches and termites into the order Dictyoptera, suborder Mantodea. A cladogram based on Evangelista et al. 2019 shows the mantises as the sister group of the cockroaches and termites. One of the earliest classifications splitting an all-inclusive Mantidae into multiple families was that proposed by Beier in 1968, recognizing eight families, though it was not until Ehrmann's reclassification into 15 families in 2002 that a multiple-family classification became universally adopted.
Klass, in 1997, studied the external male genitalia and postulated that the families Chaeteessidae and Metallyticidae diverged from the other families at an early date. However, as previously configured, the Mantidae and Thespidae especially were considered polyphyletic, so the Mantodea were revised substantially in 2019 and now include 29 families: Chaeteessidae, Mantoididae, Metallyticidae, Thespidae, Angelidae, Coptopterygidae, Liturgusidae, Photinaidae, Acanthopidae, Chroicopteridae, Leptomantellidae, Amorphoscelidae, Nanomantidae, Gonypetidae, Epaphroditidae, Majangidae, Haaniidae, Rivetinidae, Amelidae, Eremiaphilidae, Toxoderidae, Hoplocoryphidae, Miomantidae, Galinthiadidae, Empusidae, Hymenopodidae, Dactylopterygidae, Deroplatyidae, and Mantidae. Mantises are thought to have evolved from cockroach-like ancestors. Some of the earliest confidently identified mantis fossils date to the Early Cretaceous, although the Jurassic taxon Lovec was identified in 2024 from the Karabastau Formation. Fossils of the group are rare: by 2022, only 37 fossil species had been described. Fossil mantises, including one from Japan with spines on the front legs as in modern mantises, have been found in Cretaceous amber. Most fossils in amber are nymphs; compression fossils (in rock) include adults. Fossil mantises from the Crato Formation in Brazil include the 10 mm (0.39 in) long Santanmantis axelrodi, described in 2003; as in modern mantises, the front legs were adapted for catching prey. Well-preserved specimens yield details as small as 5 μm through X-ray computed tomography. Because of their superficially similar raptorial forelegs, mantidflies may be confused with mantises, though they are unrelated. Their similarity is an example of convergent evolution; mantidflies do not have tegmina (leathery forewings) like mantises, their antennae are shorter and less thread-like, and the raptorial tibia is more muscular than that of a similar-sized mantis and bends back farther in preparation for shooting out to grasp prey. Biology Mantises have large, triangular heads with a beak-like snout and mandibles. They have two bulbous compound eyes, three small simple eyes, and a pair of antennae. The articulation of the neck is remarkably flexible; some species of mantis can rotate their heads nearly 180°. The mantis thorax consists of a prothorax, a mesothorax, and a metathorax. In all species apart from the genus Mantoida, the prothorax, which bears the head and forelegs, is much longer than the other two thoracic segments. The prothorax is also flexibly articulated, allowing a wide range of movements of the head and forelimbs while the remainder of the body remains more or less immobile. Mantises are also unique among the Dictyoptera in having tympanate hearing, with two tympana in an auditory chamber in their metathorax. Most mantises can only hear ultrasound. Mantises have two spiked, grasping forelegs ("raptorial legs") with which prey items are caught and held securely. In most insect legs, including the posterior four legs of a mantis, the coxa and trochanter combine as an inconspicuous base of the leg; in the raptorial legs, however, the coxa and trochanter combine to form a segment about as long as the femur, which is a spiky part of the grasping apparatus. Located at the base of the femur is a set of discoidal spines, usually four in number, but ranging from none to as many as five depending on the species.
These spines are preceded by a number of tooth-like tubercles, which, along with a similar series of tubercles along the tibia and the apical claw near its tip, give the foreleg of the mantis its grasp on its prey. The foreleg ends in a delicate tarsus used as a walking appendage, made of four or five segments and ending in a two-toed claw with no arolium. Mantises can be loosely categorized as being macropterous (long-winged), brachypterous (short-winged), micropterous (vestigial-winged), or apterous (wingless). If not wingless, a mantis has two sets of wings: the outer wings, or tegmina, are usually narrow and leathery. They function as camouflage and as a shield for the hindwings, which are clearer and more delicate. The abdomen of all mantises consists of 10 tergites, with a corresponding set of nine sternites visible in males and seven visible in females. The abdomen tends to be slimmer in males than females, but ends in a pair of cerci in both sexes. Mantises have stereo vision. They locate their prey by sight; their compound eyes contain up to 10,000 ommatidia. A small area at the front called the fovea has greater visual acuity than the rest of the eye, and can produce the high resolution necessary to examine potential prey. The peripheral ommatidia are concerned with perceiving motion; when a moving object is noticed, the head is rapidly rotated to bring the object into the visual field of the fovea. Further motions of the prey are then tracked by movements of the mantis's head so as to keep the image centered on the fovea. Mantises' use of stereoscopic vision differs from that of humans and other primates in that they use it specifically for spotting and capturing prey. The eyes are widely spaced and laterally situated, affording a wide binocular field of vision and precise stereoscopic vision at close range. The dark spot that appears on each eye and moves as the mantis rotates its head is a pseudopupil. This occurs because the ommatidia that are viewed "head-on" absorb the incident light, while those to the side reflect it. As their hunting relies heavily on vision, mantises are primarily diurnal. Many species, however, fly at night and may then be attracted to artificial lights. They have good night vision. Male mantises in the family Liturgusidae are more frequently collected at night, suggesting greater nocturnal activity or attraction to light sources; this pattern likely extends to other mantis families, where males are also more commonly observed during nighttime surveys. Nocturnal flight is especially important for males in locating less-mobile females by detecting their pheromones, and flying at night exposes mantises to fewer bird predators than diurnal flight would. Many mantises also have an auditory thoracic organ that helps them avoid bats by detecting their echolocation calls and responding evasively. Mantises are generalist predators of arthropods. The majority of mantises are ambush predators that feed only upon live prey within their reach. They either camouflage themselves and remain stationary, waiting for prey to approach, or stalk their prey with slow, stealthy movements. Larger mantises sometimes eat smaller individuals of their own species, as well as small vertebrates such as lizards, frogs, fish, and particularly small birds. Most mantises stalk tempting prey if it strays close enough, and will go further when they are especially hungry. Once within reach, mantises strike rapidly to grasp the prey with their spiked raptorial forelegs.
Some ground and bark species pursue their prey in a more active way. For example, members of a few genera such as the ground mantises Entella, Ligaria, and Ligariella run over dry ground seeking prey, much as tiger beetles do. Some mantis species, such as Euantissa pulchra, can discriminate between different types of prey, approaching spiders that mimic non-aggressive ant species much more readily than spiders that mimic aggressive ant species. The foregut of some species extends the whole length of the insect and can be used to store prey for digestion later, which may be advantageous in an insect that feeds intermittently. Chinese mantises live longer, grow faster, and produce more young when they are able to eat pollen. Mantises are preyed on by vertebrates such as frogs, lizards, and birds, and by invertebrates such as spiders, large species of hornets, and ants. Some hunting wasps, such as certain species of Tachytes, also paralyze certain mantis species to feed their young. Generally, mantises protect themselves by camouflage, most species being cryptically colored to resemble leaves or other backgrounds, both to avoid predators and to better snare their prey. Those that live on uniformly colored surfaces such as bare earth or tree bark are dorsoventrally flattened so as to eliminate shadows that might reveal their presence. The species from different families called flower mantises are aggressive mimics: they resemble flowers convincingly enough to attract prey that come to collect pollen and nectar. Some species in Africa and Australia are able to turn black after a molt towards the end of the dry season; at this time of year, bush fires occur and this coloration enables them to blend in with the fire-ravaged landscape (fire melanism). When directly threatened, many mantis species stand tall and spread their forelegs, with their wings fanning out wide. The fanning of the wings makes the mantis seem larger and more threatening, with some species enhancing this effect with bright colors and patterns on their hindwings and inner surfaces of their front legs. If harassment persists, a mantis may strike with its forelegs and attempt to pinch or bite. As part of the bluffing (deimatic) threat display, some species may also produce a hissing sound by expelling air from the abdominal spiracles. Mantises lack chemical protection, so their displays are largely bluff. When flying at night, at least some mantises are able to detect the echolocation sounds produced by bats; when the frequency begins to increase rapidly, indicating an approaching bat, they stop flying horizontally and begin a descending spiral toward the safety of the ground, often preceded by an aerial loop or spin. If caught, they may slash captors with their raptorial legs. Mantises, like stick insects, show rocking behavior, in which the insect makes rhythmic, repetitive side-to-side movements. Functions proposed for this behavior include the enhancement of crypsis by means of the resemblance to vegetation moving in the wind. However, the repetitive swaying movements may be most important in allowing the insects to discriminate objects from the background by their relative movement, a visual mechanism typical of animals with simpler sight systems. Rocking movements by these generally sedentary insects may replace flying or running as a source of relative motion of objects in the visual field. Because ants may be predators of mantises, mantis genera such as Loxomantis, Orthodera, and Statilia, like many other arthropods, avoid attacking ants.
A variety of arthropods, including some early-instar mantises, exploit this behavior and mimic ants to evade their predators. The mating season in temperate climates typically takes place in autumn, while in tropical areas, mating can occur at any time of the year. To mate following courtship, the male usually leaps onto the female's back, clasping her thorax and wing bases with his forelegs. He then arches his abdomen to deposit and store sperm in a special chamber near the tip of the female's abdomen. The female lays between 10 and 400 eggs, depending on the species. Eggs are typically deposited in a mass of froth produced by glands in the abdomen; this froth hardens, creating a protective capsule, which together with the egg mass is called an ootheca. Depending on the species, the ootheca can be attached to a flat surface, wrapped around a plant, or even deposited in the ground. Despite the versatility and durability of the eggs, they are often preyed on, especially by several species of parasitoid wasps. In a few species, mostly ground and bark mantises in the family Tarachodidae, the mother guards the eggs. The cryptic Tarachodes maurus positions herself on bark with her abdomen covering her egg capsule, ambushing passing prey and moving very little until the eggs hatch. An unusual reproductive strategy is adopted by Brunner's stick mantis from the southern United States: no males have ever been found in this species, and the females breed parthenogenetically. The ability to reproduce by parthenogenesis has been recorded in at least two other species, Sphodromantis viridis and Miomantis sp., although these species usually reproduce sexually. In temperate climates, adults do not survive the winter and the eggs undergo a diapause, hatching in the spring. As in closely related insect groups in the superorder Dictyoptera, mantises go through three life stages: egg, nymph, and adult (mantises are among the hemimetabolous insects). For smaller species, the eggs may hatch in 3–4 weeks, as opposed to 4–6 weeks for larger species. The nymphs may be colored differently from the adult, and the early stages are often mimics of ants. A mantis nymph grows bigger as it molts its exoskeleton. Molting can happen five to ten times before the adult stage is reached, depending on the species. After the final molt, most species have wings, though some species remain wingless or brachypterous ("short-winged"), particularly in the female sex. The lifespan of a mantis depends on the species; smaller ones may live 4–8 weeks, while larger species may live 4–6 months. Sexual cannibalism is common among most predatory species of mantises in captivity, and it has sometimes been observed in natural populations, where about a quarter of male–female encounters result in the male being eaten by the female. Around 90% of the predatory species of mantises exhibit sexual cannibalism. Adult males typically outnumber females at first, but their numbers may be fairly equivalent later in the adult stage, possibly because females selectively eat the smaller males. In Tenodera sinensis, 83% of males escape cannibalism after an encounter with a female, but since multiple matings occur, the probability of a male's being eaten increases cumulatively. The female may begin feeding by biting off the male's head (as they do with regular prey), and if mating has begun, the male's movements may become even more vigorous in his delivery of sperm.
Early researchers thought that because copulatory movement is controlled by a ganglion in the abdomen, not the head, removal of the male's head was a reproductive strategy by females to enhance fertilization while obtaining sustenance. Later, this behavior appeared to be an artifact of intrusive laboratory observation. Whether the behavior is natural in the field or also the result of distractions caused by the human observer remains controversial. Mantises are highly visual organisms and notice any disturbance in the laboratory or field, such as bright lights or moving scientists. Chinese mantises that had been fed ad libitum (so that they were not hungry) actually displayed elaborate courtship behavior when left undisturbed. The male engages the female in a courtship dance, to change her interest from feeding to mating. Under such circumstances, the female has been known to respond with a defensive deimatic display by flashing the colored eyespots on the inside of her front legs. The reason for sexual cannibalism has been debated; experiments show that females on poor diets are likelier to engage in sexual cannibalism than those on good diets. Some hypothesize that submissive males gain a selective advantage by producing offspring; this is supported by a quantifiable increase in the duration of copulation among males that are cannibalized, which in some cases doubles both the duration of copulation and the chance of fertilization. This contrasts with a study in which males were seen to approach hungry females with more caution and to remain mounted on hungry females for a longer time, indicating that males that actively avoid cannibalism may mate with multiple females. The same study also found that hungry females generally attracted fewer males than those that were well fed. The act of dismounting after copulation is dangerous for males, for it is the time that females most frequently cannibalize their mates. An increase in mounting duration appears to indicate that males wait for an opportune time to dismount a hungry female, who would be likely to cannibalize her mate. Experiments have revealed that the sex ratio in an environment determines the male copulatory behavior of Mantis religiosa, which in turn affects the cannibalistic tendencies of the female. This supports the sperm competition hypothesis, because the polyandrous treatment recorded the longest copulation duration and the lowest rate of cannibalism; it further suggests that dismounting the female can make males susceptible to cannibalism. Relationship with humans One of the earliest mantis references is in the ancient Chinese dictionary Erya, which gives a brief description of the insect and records its attributes in poetry, where it represents courage and fearlessness. A later text, the Jingshi Zhenglei Daguan Bencao (transl. "Great History of Medical Material Annotated and Arranged by Types, Based upon the Classics and Historical Works") from 1108, gives accurate details of the construction of the egg packages, the development cycle, anatomy, and the function of the antennae. Although mantises are rarely mentioned in Ancient Greek sources, a female mantis in threat posture is accurately illustrated on a series of fifth-century BC silver coins, including didrachms, from Metapontum in Lucania. The 10th-century Byzantine encyclopedia Suda described the insect called "mantis" (μάντις) as a pale green, clumsy, and slow-moving locust, adding that some people observed its movements for the purpose of augury.
In addition, the Suda mentions the phrase "arouraia mantis" (Ἀρουραία μάντις), explaining that it was a proverbial expression used to mock people who were sluggish and ineffectual but still treated as if they had wisdom or insight. The same source translates Zenobius 2.94 with the words seriphos (possibly a mantis) and graus, an old woman, implying a thin, dried-up stick of a body. Mantises are a common motif in Luna Polychrome ceramics of pre-Columbian Nicaragua, and are believed to represent a deity or spirit called "Madre Culebra". Western descriptions of the biology and morphology of the mantises became more accurate in the 18th century. Roesel von Rosenhof illustrated and described mantises and their cannibalistic behavior in the Insekten-Belustigungen (Insect Entertainments). In the early 1900s, people in the Ozarks region of the United States referred to them as Devil's horses. Aldous Huxley made philosophical observations about the nature of death while two mantises mated in the sight of two characters in his 1962 novel Island (the species was Gongylus gongylodes). The naturalist Gerald Durrell's humorously autobiographical 1956 book My Family and Other Animals includes a four-page account of an almost evenly matched battle between a mantis and a gecko. Shortly before the fatal dénouement, Durrell narrates: he [Geronimo the gecko] crashed into the mantis and made her reel, and grabbed the underside of her thorax in his jaws. Cicely [the mantis] retaliated by snapping both her front legs shut on Geronimo's hindlegs. They rustled and staggered across the ceiling and down the wall, each seeking to gain some advantage. M. C. Escher's woodcut Dream depicts a human-sized mantis standing on a sleeping bishop. A cultural trope imagines the female mantis as a femme fatale. The idea is propagated in cartoons by Cable, Guy and Rodd, LeLievre, T. McCracken, and Mark Parisi, among others. The trope also concludes Isabella Rossellini's short film about the life of a praying mantis in her 2008 Green Porno season for the Sundance Channel. The Deadly Mantis is a 1957 American science fiction monster film in which a giant mantis threatens mankind. Two martial arts separately developed in China have movements and fighting strategies based on those of the mantis. As one of these arts was developed in northern China, and the other in southern parts of the country, the arts are today referred to (both in English and Chinese) as 'Northern Praying Mantis' and 'Southern Praying Mantis'. Both are very popular in China, and have also been exported to the West in recent decades. According to local beliefs in Africa, the mantis brings good luck. The mantis was revered by the southern African Khoi and San, in whose cultures man and nature were intertwined; for its praying posture, the mantis was even named Hottentotsgot ("god of the Hottentots") in the Afrikaans language that had developed among the first European settlers. However, at least for the San, the mantis was only one of the manifestations of a trickster-deity, ǀKaggen, who could assume many other forms, such as a snake, hare or vulture. Several ancient civilizations considered the insect to have supernatural powers; for the Greeks, it had the ability to show lost travelers the way home; in the Ancient Egyptian Book of the Dead, the "bird-fly" is a minor god that leads the souls of the dead to the underworld; and in a list of 9th-century BC Nineveh grasshoppers (buru), the mantis is named necromancer (buru-enmeli) and soothsayer (buru-enmeli-ashaga).
Some pre-Columbian cultures in western Nicaragua have preserved oral traditions of the mantis as "Madre Culebra", a powerful predator and a symbol of female authority. Mantises are among the insects most widely kept as pets. Because the lifespan of a mantis is only about a year, people who want to keep mantises often breed them. In 2013, at least 31 species were kept and bred in the United Kingdom, the Netherlands, and the United States; in 1996, at least 50 species were known to be kept in captivity by members of the Mantis Study Group. Naturally occurring mantis populations provide plant pest control, and gardeners who prefer to avoid pesticides may encourage mantises in the hope of controlling insect pests. However, mantises lack key attributes of biological pest control agents: they do not specialize in a single pest insect and do not multiply rapidly in response to an increase in such a prey species, but are general predators. They therefore have "negligible value" in biological control. Two species, the Chinese mantis and the European mantis, were deliberately introduced to North America in the hope that they would serve as pest controls for agriculture; they have spread widely in both the United States and Canada. In 2016, the Association for the Advancement of Artificial Intelligence produced a prototype robot inspired by the forelegs of the praying mantis, with front legs that allow the robot to walk, climb steps, and grasp objects. The multi-jointed leg provides dexterity via a rotatable joint. Future models may include a more spiked foreleg to improve grip and the ability to support more weight.
========================================
[SOURCE: https://en.wikipedia.org/wiki/U.B._Funkeys] | [TOKENS: 1040]
Contents U.B. Funkeys U.B. Funkeys is a toys-to-life personal computer game and collectible figure set created by Mattel. It was introduced in 2007 and remained on the market until the toys were discontinued in the United States in 2010. Play consisted of a personal computer game that worked together with collectible figures that represented characters in the game. There are over 45 different "species" of Funkeys. Most Funkeys come in three styles: normal, rare, and very rare. Gameplay involves players placing figures on the hub (a special USB unit shaped to look like a larger version of the small figures), which in turn makes them appear in the game. Each figure, when connected to the hub, allows players to unlock new areas of the game. The hub is purchased in a starter pack with two to four of the collectible figures. It is required to play the game. The line ran from August 2007 to January 2010. The product was exhibited by Mattel in February 2007 at the American International Toy Fair and designed by Radica Games. The game software was developed by Arkadium. Description Funkeys are characters that inhabit a virtual world called Terrapinia. Players navigate a number of zones and portals where they play games to earn coins. With their coins they can buy items to decorate their homes, referred to as "cribs" in the game. Users progress through the game as they collect different figures. Each "tribe" is able to access different areas, games and items. Most figures have two sets of alternate colors, and using these "Rare" or "Very Rare" Funkeys gives the player access to more items inside their respective shops. There are many portals to go through: Kelpy Basin, Magma Gorge, Laputta Station, Funkiki Island, Daydream Oasis, Nightmare Rift, Royalton Raceway, Hidden Realm, and Paradox Green. To use a portal, the player has to use a Funkey with a game room in the given location. Regardless of tribe, any Funkey can return to Funkeystown. In every zone, there is an enemy character who appears if the player stays outside for too long. Encountering these characters will start a short minigame where the player can win or lose coins. Throughout the game, the player hears of Master Lox, the main antagonist of the series. He locked each of the portals and game rooms, restricting access to particular Funkeys. A series of Wendy's Kids' Meal toys included a bobblehead, a backpack clip, a 3D board game, and two CDs containing prototypes of the game. Sets The series spawned a variety of sets, which were available throughout the series' lifespan. These were single Funkeys, starter packs, adventure packs, multiplayer sets, chat sets, and limited edition packs. A set of Funkeys based on the Speed Racer franchise was also released; however, a second series of characters was cancelled following the poor performance of the live-action film. Starter packs contain a white hub, installation disk, and instruction booklet. In general, each contained two to four Funkeys relating to a particular world. The white hub design would change in some packs to reflect the new worlds. For instance, hubs made during the Dream State run have a chest with purple wisps on them. During the series' lifespan, there were a number of adventure packs released. These contained several Funkeys from the same world that were all of the same rarity (i.e. all normal, rare, or very rare). Sometimes these Funkeys would be bundled together in starter packs, as well. 
Several games from the Dream State series onward offer multiplayer functionality. These Funkeys would be bundled with one extra Funkey as a minor bonus (like adventure packs, both Funkeys are of the same rarity). Similarly, Chat Funkeys (which offer a chat room in place of a game) are bundled with a bonus Funkey. Since all chat Funkeys are of Normal rarity, the bundled Funkey is as well. Limited edition packs were created that came with Funkeys that the player could not normally use. This included the Funkeystown adventure pack and the Funkiki Island pack. A Dream State pack was also planned, but never officially released. The Funkeystown pack came with a Henchman figure, a Master Lox figure, and the Mayor Sayso figure. Lox and the Henchman offered access to the Villain's Den, a shop-less game room in Funkeystown with a coin-related game. Mayor Sayso can access any game room in the original game, but cannot play any of the games. The Funkiki Island pack came with Jerry, a Funkiki native, and the Pineapple King, along with a Normal Sol figure. In a similar fashion to the Funkeystown adventure pack, Jerry can access any game room but cannot play games, and the native and the Pineapple King can access the Funkiki Native Outpost.[citation needed]
========================================
[SOURCE: https://en.wikipedia.org/wiki/CPU_cache] | [TOKENS: 12381]
Contents CPU cache A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations, avoiding the need to always refer to main memory, which may be tens to hundreds of times slower to access. Cache memory is typically implemented with static random-access memory (SRAM), which requires multiple transistors to store a single bit. This makes it expensive in terms of the area it takes up, and in modern CPUs the cache is typically the largest part by chip area. The size of the cache needs to be balanced with the general desire for smaller chips, which cost less. Some modern designs implement some or all of their cache using the physically smaller eDRAM, which is slower to use than SRAM but allows larger amounts of cache for any given amount of chip area. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with separate instruction-specific (I-cache) and data-specific (D-cache) caches at level 1. The different levels are implemented in different areas of the chip; L1 is located as close to a CPU core as possible and thus offers the highest speed due to short signal paths, but requires careful design. L2 caches are physically separate from the CPU core and operate more slowly, but place fewer demands on the chip designer and can be made much larger without impacting the CPU design. L3 caches are generally shared among multiple CPU cores. Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB), which is part of the memory management unit (MMU) that most CPUs have. Input/output sections also often contain data buffers that serve a similar purpose. Overview To access data in main memory, a multi-step process is used and each step introduces a delay. For instance, to read a value from memory in a simple computer system, the CPU first selects the address to be accessed by expressing it on the address bus and waiting a fixed time to allow the value to settle. The memory device with that value, normally implemented in DRAM, holds that value in a very low-energy form that is not powerful enough to be read directly by the CPU. Instead, it has to copy that value from storage into a small buffer which is connected to the data bus. It then waits a certain time to allow this value to settle before reading the value from the data bus. By locating the memory physically closer to the CPU, the time needed for the buses to settle is reduced, and by replacing the DRAM with SRAM, which holds the value in a form that does not require amplification to be read, the delay within the memory itself is eliminated. This makes the cache much faster both to respond and to read or write. SRAM, however, requires anywhere from four to six transistors to hold a single bit, depending on the type, whereas DRAM generally uses one transistor and one capacitor per bit, which makes it able to store much more data for any given chip area. Implementing some memory in a faster format can lead to large performance improvements. When trying to read from or write to a location in the memory, the processor checks whether the data from that location is already in the cache. 
If so, the processor will read from or write to the cache instead of the much slower main memory. Many modern desktop, server, and industrial CPUs have at least three independent levels of caches (L1, L2 and L3) and different types of caches. History Early examples of CPU caches include the Atlas 2 and the IBM System/360 Model 85 in the 1960s. The first CPUs that used a cache had only one level of cache; unlike later level 1 cache, it was not split into L1d (for data) and L1i (for instructions). Split L1 cache started in 1976 with the IBM 801 CPU, became mainstream in the late 1980s, and in 1997 entered the embedded CPU market with the ARMv5TE. As of 2015, even sub-dollar SoCs split the L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split, and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L1 cache, which is usually not shared between the cores. The L2 cache, and lower-level caches, may be shared between the cores. L4 cache is currently uncommon, and is generally dynamic random-access memory (DRAM) on a separate die or chip, rather than static random-access memory (SRAM). An exception to this is when eDRAM is used for all levels of cache, down to L1. Historically L1 was also on a separate die; however, bigger die sizes have allowed integration of it as well as other cache levels, with the possible exception of the last level. Each cache level closer to the core tends to be smaller and faster than the levels below it. Caches (like RAM historically) have generally been sized in powers of two: 2, 4, 8, 16, etc. KiB. Once sizes reached the MiB range (i.e. for larger non-L1 caches), the pattern broke down early on, to allow larger caches without being forced into the doubling-in-size paradigm; the Intel Core 2 Duo, for example, shipped with a 3 MiB L2 cache in April 2008. This happened much later for L1 caches, as their size is generally still a small number of KiB. The IBM zEC12 from 2012 is an exception, with an unusually large 96 KiB L1 data cache for its time; the IBM z13 has a 96 KiB L1 instruction cache (and a 128 KiB L1 data cache), and Intel Ice Lake-based processors from 2018 have a 48 KiB L1 data cache and a 48 KiB L1 instruction cache. In 2020, some Intel Atom CPUs (with up to 24 cores) have cache sizes in multiples of 4.5 MiB, such as 4.5 MiB and 15 MiB. Operation Data is transferred between memory and cache in blocks of fixed size, called cache lines or cache blocks. When a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as the requested memory location (called a tag). When the processor needs to read or write a location in memory, it first checks for a corresponding entry in the cache. The cache checks for the contents of the requested memory location in any cache lines that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred. However, if the processor does not find the memory location in the cache, a cache miss has occurred. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. For a cache miss, the cache allocates a new entry and copies data from main memory, then the request is fulfilled from the contents of the cache. To make room for the new entry on a cache miss, the cache may have to evict one of the existing entries. The heuristic it uses to choose the entry to evict is called the replacement policy. 
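The hit-and-miss flow just described can be sketched in a few lines of Python. This is a toy illustration only, not a model of any real hardware; the 64-byte line size, the four-entry capacity, and the FIFO-style eviction are assumptions chosen purely for brevity.

```python
# Minimal sketch of the hit/miss flow described above (illustrative only).
# Assumptions: 64-byte cache lines, a tiny fully associative cache of 4 entries,
# and simple FIFO eviction standing in for a real replacement policy.
LINE_SIZE = 64
CAPACITY = 4          # number of cache entries

cache = {}            # tag -> data block (the "cache entries")
insertion_order = []  # used only to pick a FIFO victim

def read(address, memory):
    tag = address // LINE_SIZE          # which memory block this address belongs to
    if tag in cache:                    # cache hit: serve directly from the cache
        return cache[tag][address % LINE_SIZE]
    # cache miss: make room if necessary, then fill from "main memory"
    if len(cache) >= CAPACITY:
        victim = insertion_order.pop(0) # evict per the (FIFO) replacement policy
        del cache[victim]
    block_start = tag * LINE_SIZE
    cache[tag] = memory[block_start:block_start + LINE_SIZE]
    insertion_order.append(tag)
    return cache[tag][address % LINE_SIZE]

memory = bytes(range(256)) * 16              # 4 KiB of fake main memory
print(read(100, memory), read(100, memory))  # first access misses, second hits
```

Running it twice on the same address shows the first access missing (and filling a cache entry) while the second is served from the cache.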
The fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is generally difficult, so there is no perfect method to choose among the variety of replacement policies available. One popular replacement policy, least recently used (LRU), replaces the least recently accessed entry. Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed. This avoids the overhead of loading something into the cache without having any reuse. Cache entries may also be disabled or locked depending on the context. If data are written to the cache, at some point they must also be written to main memory; the timing of this write is known as the write policy. In a write-through cache, every write to the cache causes a write to main memory. Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to the main memory; instead, the written-over locations are marked as dirty and are written back to the main memory only when they are evicted from the cache. For this reason, a read miss in a write-back cache may sometimes require two memory accesses to service: one to first write the dirty location to main memory, and then another to read the new location from memory. Also, a write to a main memory location that is not yet mapped in a write-back cache may evict an already dirty location, thereby freeing that cache space for the new memory location. There are intermediate policies as well. The cache may be write-through, but the writes may be held in a store data queue temporarily, usually so multiple stores can be processed together (which can reduce bus turnarounds and improve bus utilization). Cached data from the main memory may be changed by other entities (e.g., peripherals using direct memory access (DMA) or another core in a multi-core processor), in which case the copy in the cache may become out-of-date or stale. Alternatively, when a CPU in a multiprocessor system updates data in the cache, copies of data in caches associated with other CPUs become stale. Communication protocols between the cache managers that keep the data consistent are known as cache coherence protocols. Cache performance measurement has become important as the speed gap between memory performance and processor performance continues to widen. The cache was introduced to reduce this speed gap. Thus knowing how well the cache is able to bridge the gap in the speed of processor and memory becomes important, especially in high-performance systems. The cache hit rate and the cache miss rate play an important role in determining this performance. Reducing the miss rate is one of the main ways to improve cache performance, and decreasing the cache access time also boosts performance. The time taken to fetch one cache line from memory (read latency due to a cache miss) matters because the CPU will run out of work while waiting for the cache line. When a CPU reaches this state, it is called a stall. As CPUs become faster compared to main memory, stalls due to cache misses displace more potential computation; modern CPUs can execute hundreds of instructions in the time taken to fetch a single cache line from main memory. 
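The cost of such stalls is often summarised as the average memory access time (AMAT): the hit time plus the miss rate times the miss penalty. The sketch below simply evaluates that formula; the 4-cycle hit time, 200-cycle miss penalty, and 3% miss rate are illustrative assumptions, not figures for any particular CPU.

```python
# Rough illustration of how even a small miss rate dominates average access time.
# The numbers are assumed for the example: a 4-cycle cache hit, a 200-cycle
# miss penalty to main memory, and a 3% miss rate.
HIT_TIME = 4          # cycles for a cache hit
MISS_PENALTY = 200    # extra cycles to fetch a line from main memory
MISS_RATE = 0.03

# Average memory access time (AMAT) = hit time + miss rate * miss penalty.
amat = HIT_TIME + MISS_RATE * MISS_PENALTY
print(f"average access time: {amat:.1f} cycles")   # 10.0 cycles

# Each individual miss still stalls the pipeline for the full penalty, which is
# what the latency-hiding techniques described next try to overlap with work.
print(f"cycles lost per miss: {MISS_PENALTY}")
```

Even a 3% miss rate more than doubles the average access time in this example, which is why the techniques described next focus on keeping the CPU busy while a miss is outstanding.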
Various techniques have been employed to keep the CPU busy during this time, including out-of-order execution in which the CPU attempts to execute independent instructions after the instruction that is waiting for the cache miss data. Another technology, used by many processors, is simultaneous multithreading (SMT), which allows an alternate thread to use the CPU core while the first thread waits for required CPU resources to become available. Associativity The placement policy decides where in the cache a copy of a particular entry of main memory will go. If the placement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in the main memory can go in just one place in the cache, the cache is direct-mapped. Many caches implement a compromise in which each entry in the main memory can go to any one of N places in the cache, and are described as N-way set associative. For example, the level-1 data cache in an AMD Athlon is two-way set associative, which means that any particular location in main memory can be cached in either of two locations in the level-1 data cache. Choosing the right value of associativity involves a trade-off. If there are ten places to which the placement policy could have mapped a memory location, then to check if that location is in the cache, ten cache entries must be searched. Checking more places takes more power and chip area, and potentially more time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses), so that the CPU wastes less time reading from the slow main memory. The general guideline is that doubling the associativity, from direct mapped to two-way, or from two-way to four-way, has about the same effect on raising the hit rate as doubling the cache size. However, increasing associativity beyond four does not improve the hit rate as much, and is generally done for other reasons (see virtual aliasing). Some CPUs can dynamically reduce the associativity of their caches in low-power states, which acts as a power-saving measure. The main organizations, in order from simple but worse-performing to complex but better-performing, are as follows. In a direct-mapped cache, each location in the main memory can go in only one entry in the cache. Therefore, a direct-mapped cache can also be called a "one-way set associative" cache. It does not have a placement policy as such, since there is no choice of which cache entry's contents to evict. This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable. Let x be the block number in the cache, y the block number of memory, and n the number of blocks in the cache; then the mapping is given by the equation x = y mod n (illustrated in the short sketch after this paragraph). In a two-way set associative cache, each location in the main memory can be cached in either of two locations in the cache, so one logical question is: which one of the two? The simplest and most commonly used scheme is to use the least significant bits of the memory location's index as the index for the cache memory, and to have two entries for each index. One benefit of this scheme is that the tags stored in the cache do not have to include that part of the main memory address which is implied by the cache memory's index. 
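The mapping x = y mod n, and its set-associative counterpart, can be made concrete with a short sketch. The block counts below are arbitrary example values (8 cache blocks, 2 ways), not taken from any real design.

```python
# Illustration of block placement (assumed example sizes, not any real cache).
N_BLOCKS = 8                     # total cache blocks
WAYS = 2                         # associativity for the set-associative case
N_SETS = N_BLOCKS // WAYS        # a 2-way cache with 8 blocks has 4 sets

def direct_mapped_slot(block_number):
    # Each memory block can go in exactly one cache block: x = y mod n.
    return block_number % N_BLOCKS

def set_associative_set(block_number):
    # Each memory block maps to one set, but may occupy any of the WAYS ways in it.
    return block_number % N_SETS

for y in (3, 11, 19):
    print(y, direct_mapped_slot(y), set_associative_set(y))
# Memory blocks 3, 11 and 19 all collide in direct-mapped slot 3; in the 2-way
# cache they also share set 3, but two of them can coexist there at once.
```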
Since the cache tags in such a scheme have fewer bits, they require fewer transistors, take less space on the processor circuit board or on the microprocessor chip, and can be read and compared faster. Also, the LRU replacement algorithm is especially simple, since only one bit needs to be stored for each pair. One of the advantages of a direct-mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index which might have a copy of that location in memory is known. That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address. The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well. A subset of the tag, called a hint, can be used to pick just one of the possible cache entries mapping to the requested address. The entry selected by the hint can then be used in parallel with checking the full tag. The hint technique works best when used in the context of address translation, as explained below. Other schemes have been suggested, such as the skewed cache, where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function. A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern. The downside is extra latency from computing the hash function. Additionally, when it comes time to load a new line and evict an old line, it may be difficult to determine which existing line was least recently used, because the new line conflicts with data at different indexes in each way; LRU tracking for non-skewed caches is usually done on a per-set basis. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones. A true set-associative cache tests all the possible ways simultaneously, using something like a content-addressable memory. A pseudo-associative cache tests each possible way one at a time. A hash-rehash cache and a column-associative cache are examples of a pseudo-associative cache. In the common case of finding a hit in the first way tested, a pseudo-associative cache is as fast as a direct-mapped cache, but it has a much lower conflict miss rate than a direct-mapped cache, closer to the miss rate of a fully associative cache. Compared with a direct-mapped cache, a set-associative cache has a reduced number of bits in its cache set index, which maps to a cache set holding multiple ways or blocks, such as 2 blocks for a 2-way set associative cache and 4 blocks for a 4-way set associative cache. Compared with a direct-mapped cache, the unused cache index bits become a part of the tag bits. For example, a 2-way set associative cache contributes 1 bit to the tag and a 4-way set associative cache contributes 2 bits to the tag. The basic idea of the multicolumn cache is to use the set index to map to a cache set as a conventional set associative cache does, and to use the added tag bits to index a way in the set. For example, in a 4-way set associative cache, the two bits are used to index way 00, way 01, way 10, and way 11, respectively. This double cache indexing is called a "major location mapping", and its latency is equivalent to a direct-mapped access. 
Extensive experiments in multicolumn cache design show that the hit ratio to major locations is as high as 90%. If a cache mapping conflicts with a cache block in the major location, the existing cache block will be moved to another cache way in the same set, which is called the "selected location". Because the newly indexed cache block is a most recently used (MRU) block, it is placed in the major location in a multicolumn cache with a consideration of temporal locality. Since the multicolumn cache is designed for a cache with a high associativity, the number of ways in each set is high; thus, it is easy to find a selected location in the set. A selected location index is maintained by additional hardware for the major location in a cache block.[citation needed] A multicolumn cache retains a high hit ratio due to its high associativity, and has a low latency comparable to that of a direct-mapped cache due to its high percentage of hits in major locations. The concepts of major locations and selected locations in multicolumn caches have been used in several cache designs, including ARM Cortex-R chips, Intel's way-predicting cache memory, IBM's reconfigurable multi-way associative cache memory, and Oracle's dynamic cache replacement way selection based on address tag bits. Cache entry structure Cache row entries usually have the following structure: the data block (cache line) contains the actual data fetched from the main memory; the tag contains (part of) the address of the actual data fetched from the main memory; and the flag bits are discussed below. The "size" of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The tag, flag and error correction code bits are not included in the size, although they do affect the physical area of a cache.) An effective memory address which goes along with the cache line (memory block) is split (MSB to LSB) into the tag, the index and the block offset. The index describes which cache set the data has been put in. The index length is ⌈log2(s)⌉ bits for s cache sets. The block offset specifies the desired data within the stored data block within the cache row. Typically the effective address is in bytes, so the block offset length is ⌈log2(b)⌉ bits, where b is the number of bytes per data block. The tag contains the most significant bits of the address, which are checked against all rows in the current set (the set has been retrieved by index) to see if this set contains the requested address. If it does, a cache hit occurs. The tag length in bits is the address length minus the index length minus the block offset length. Some authors refer to the block offset as simply the "offset" or the "displacement". The original Pentium 4 processor had a four-way set associative L1 data cache of 8 KiB in size, with 64-byte cache blocks. Hence, there are 8 KiB / 64 = 128 cache blocks. The number of sets is equal to the number of cache blocks divided by the number of ways of associativity, which leads to 128 / 4 = 32 sets, and hence 2^5 = 32 different indices. There are 2^6 = 64 possible offsets. Since the CPU address is 32 bits wide, this implies 32 − 5 − 6 = 21 bits for the tag field. The original Pentium 4 processor also had an eight-way set associative L2 integrated cache 256 KiB in size, with 128-byte cache blocks. This implies 32 − 8 − 7 = 17 bits for the tag field. 
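The tag, index, and offset widths in the two Pentium 4 examples can be reproduced mechanically from the cache parameters quoted above; the small helper below is just a restatement of those calculations (assuming power-of-two sizes) and prints the same 21- and 17-bit tag widths.

```python
# Recomputing the address-field widths from the Pentium 4 examples in the text.
# The arguments are the cache size, line size, associativity, and address width.
from math import log2

def address_fields(cache_size, line_size, ways, address_bits):
    n_blocks = cache_size // line_size
    n_sets = n_blocks // ways
    offset_bits = int(log2(line_size))   # selects a byte within the cache line
    index_bits = int(log2(n_sets))       # selects the cache set
    tag_bits = address_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

print(address_fields(8 * 1024, 64, 4, 32))     # L1 data cache: (21, 5, 6)
print(address_fields(256 * 1024, 128, 8, 32))  # L2 cache:      (17, 8, 7)
```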
An instruction cache requires only one flag bit per cache row entry: a valid bit. The valid bit indicates whether or not a cache block has been loaded with valid data. On power-up, the hardware sets all the valid bits in all the caches to "invalid". Some systems also set a valid bit to "invalid" at other times, such as when multi-master bus snooping hardware in the cache of one processor hears an address broadcast from some other processor, and realizes that certain data blocks in the local cache are now stale and should be marked invalid. A data cache typically requires two flag bits per cache line – a valid bit and a dirty bit. Having a dirty bit set indicates that the associated cache line has been changed since it was read from main memory ("dirty"), meaning that the processor has written data to that line and the new value has not propagated all the way to main memory. Cache miss A cache miss is a failed attempt to read or write a piece of data in the cache, which results in a main memory access with much longer latency. There are three kinds of cache misses: instruction read miss, data read miss, and data write miss. Cache read misses from an instruction cache generally cause the largest delay, because the processor, or at least the thread of execution, has to wait (stall) until the instruction is fetched from main memory. Cache read misses from a data cache usually cause a smaller delay, because instructions not dependent on the cache read can be issued and continue execution until the data are returned from main memory, and the dependent instructions can resume execution. Cache write misses to a data cache generally cause the shortest delay, because the write can be queued and there are few limitations on the execution of subsequent instructions; the processor can continue until the queue is full. For a detailed introduction to the types of misses, see cache performance measurement and metric. Address translation Most general purpose CPUs implement some form of virtual memory. To summarize, either each program running on the machine sees its own simplified address space, which contains code and data for that program only, or all programs run in a common virtual address space. A program executes by calculating, comparing, reading and writing to addresses of its virtual address space, rather than addresses of physical address space, making programs simpler and thus easier to write. Virtual memory requires the processor to translate virtual addresses generated by the program into physical addresses in main memory. The portion of the processor that does this translation is known as the memory management unit (MMU). The fast path through the MMU can perform those translations stored in the translation lookaside buffer (TLB), which is a cache of mappings from the operating system's page table, segment table, or both. For the purposes of the present discussion, there are three important features of address translation: One early virtual memory system, the IBM M44/44X, required an access to a mapping table held in core memory before every programmed access to main memory.[NB 1] With no caches, and with the mapping table memory running at the same speed as main memory this effectively cut the speed of memory access in half. Two early machines that used a page table in main memory for mapping, the IBM System/360 Model 67 and the GE 645, both had a small associative memory as a cache for accesses to the in-memory page table. 
Both machines predated the first machine with a cache for main memory, the IBM System/360 Model 85, so the first hardware cache used in a computer system was not a data or instruction cache, but rather a TLB. Caches can be divided into four types, based on whether the index and tag correspond to physical or virtual addresses: physically indexed, physically tagged (PIPT); virtually indexed, virtually tagged (VIVT); virtually indexed, physically tagged (VIPT); and physically indexed, virtually tagged (PIVT). The speed of this recurrence (the load latency) is crucial to CPU performance, and so most modern level-1 caches are virtually indexed, which at least allows the MMU's TLB lookup to proceed in parallel with fetching the data from the cache RAM. But virtual indexing is not the best choice for all cache levels. The cost of dealing with virtual aliases grows with cache size, and as a result most level-2 and larger caches are physically indexed. Caches have historically used both virtual and physical addresses for the cache tags, although virtual tagging is now uncommon. If the TLB lookup can finish before the cache RAM lookup, then the physical address is available in time for tag compare, and there is no need for virtual tagging. Large caches, then, tend to be physically tagged, and only small, very low latency caches are virtually tagged. In recent general-purpose CPUs, virtual tagging has been superseded by virtual hints, as described below. A cache that relies on virtual indexing and tagging becomes inconsistent after the same virtual address is mapped into different physical addresses (homonym), which can be solved by using physical addresses for tagging, or by storing the address space identifier in the cache line. However, the latter approach does not help against the synonym problem, in which several cache lines end up storing data for the same physical address. Writing to such locations may update only one location in the cache, leaving the others with inconsistent data. This issue may be solved by using non-overlapping memory layouts for different address spaces, or otherwise the cache (or a part of it) must be flushed when the mapping changes. The great advantage of virtual tags is that, for associative caches, they allow the tag match to proceed before the virtual to physical translation is done. However, coherence probes and evictions present a physical address for action. The hardware must have some means of converting the physical addresses into a cache index, generally by storing physical tags as well as virtual tags. For comparison, a physically tagged cache does not need to keep virtual tags, which is simpler. When a virtual to physical mapping is deleted from the TLB, cache entries with those virtual addresses will have to be flushed somehow. Alternatively, if cache entries are allowed on pages not mapped by the TLB, then those entries will have to be flushed when the access rights on those pages are changed in the page table. It is also possible for the operating system to ensure that no virtual aliases are simultaneously resident in the cache. The operating system makes this guarantee by enforcing page coloring, which is described below. Some early RISC processors (SPARC, RS/6000) took this approach. It has not been used recently, as the hardware cost of detecting and evicting virtual aliases has fallen and the software complexity and performance penalty of perfect page coloring has risen. It can be useful to distinguish the two functions of tags in an associative cache: they are used to determine which way of the entry set to select, and they are used to determine if the cache hit or missed. 
The second function must always be correct, but it is permissible for the first function to guess, and get the wrong answer occasionally. Some processors (e.g. early SPARCs) have caches with both virtual and physical tags. The virtual tags are used for way selection, and the physical tags are used for determining hit or miss. This kind of cache enjoys the latency advantage of a virtually tagged cache, and the simple software interface of a physically tagged cache. It bears the added cost of duplicated tags, however. Also, during miss processing, the alternate ways of the cache line indexed have to be probed for virtual aliases and any matches evicted. The extra area (and some latency) can be mitigated by keeping virtual hints with each cache entry instead of virtual tags. These hints are a subset or hash of the virtual tag, and are used for selecting the way of the cache from which to get data and a physical tag. Like a virtually tagged cache, there may be a virtual hint match but physical tag mismatch, in which case the cache entry with the matching hint must be evicted so that cache accesses after the cache fill at this address will have just one hint match. Since virtual hints have fewer bits than virtual tags distinguishing them from one another, a virtually hinted cache suffers more conflict misses than a virtually tagged cache. Perhaps the ultimate reduction of virtual hints can be found in the Pentium 4 (Willamette and Northwood cores). In these processors the virtual hint is effectively two bits, and the cache is four-way set associative. Effectively, the hardware maintains a simple permutation from virtual address to cache index, so that no content-addressable memory (CAM) is necessary to select the right one of the four ways fetched. Large physically indexed caches (usually secondary caches) run into a problem: the operating system rather than the application controls which pages collide with one another in the cache. Differences in page allocation from one program run to the next lead to differences in the cache collision patterns, which can lead to very large differences in program performance. These differences can make it very difficult to get a consistent and repeatable timing for a benchmark run. To understand the problem, consider a CPU with a 1 MiB physically indexed direct-mapped level-2 cache and 4 KiB virtual memory pages. Sequential physical pages map to sequential locations in the cache until after 256 pages the pattern wraps around. We can label each physical page with a color of 0–255 to denote where in the cache it can go. Locations within physical pages with different colors cannot conflict in the cache. Programmers attempting to make maximum use of the cache may arrange their programs' access patterns so that only 1 MiB of data need be cached at any given time, thus avoiding capacity misses. But they should also ensure that the access patterns do not have conflict misses. One way to think about this problem is to divide up the virtual pages the program uses and assign them virtual colors in the same way as physical colors were assigned to physical pages before. Programmers can then arrange the access patterns of their code so that no two pages with the same virtual color are in use at the same time. There is a wide literature on such optimizations (e.g. loop nest optimization), largely coming from the High Performance Computing (HPC) community. 
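With the figures used in this example (a 1 MiB direct-mapped cache and 4 KiB pages, hence 256 colors), a page's color is simply its page number modulo the number of colors. The addresses in the sketch below are arbitrary illustrations.

```python
# Page colors for the example above: 1 MiB direct-mapped cache, 4 KiB pages.
CACHE_SIZE = 1 << 20                   # 1 MiB
PAGE_SIZE = 4 << 10                    # 4 KiB
NUM_COLORS = CACHE_SIZE // PAGE_SIZE   # 256 colors, numbered 0..255

def page_color(physical_address):
    page_number = physical_address // PAGE_SIZE
    return page_number % NUM_COLORS

# Two physical pages exactly one cache-size apart get the same color and
# therefore compete for the same cache locations:
print(page_color(0x0000_0000), page_color(0x0010_0000))   # both color 0
print(page_color(0x0000_1000))                             # color 1
```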
The snag is that while all the pages in use at any given moment may have different virtual colors, some may have the same physical colors. In fact, if the operating system assigns physical pages to virtual pages randomly and uniformly, it is extremely likely that some pages will have the same physical color, and then locations from those pages will collide in the cache (this is the birthday paradox). The solution is to have the operating system attempt to assign different physical color pages to different virtual colors, a technique called page coloring. Although the actual mapping from virtual to physical color is irrelevant to system performance, odd mappings are difficult to keep track of and have little benefit, so most approaches to page coloring simply try to keep physical and virtual page colors the same. If the operating system can guarantee that each physical page maps to only one virtual color, then there are no virtual aliases, and the processor can use virtually indexed caches with no need for extra virtual alias probes during miss handling. Alternatively, the OS can flush a page from the cache whenever it changes from one virtual color to another. As mentioned above, this approach was used for some early SPARC and RS/6000 designs. The software page coloring technique has been used to effectively partition the shared Last level Cache (LLC) in multicore processors. This operating system-based LLC management in multicore processors has been adopted by Intel. Cache hierarchy in a modern processor Modern processors have multiple interacting on-chip caches. The operation of a particular cache can be completely specified by the cache size, the cache block size, the number of blocks in a set, the cache set replacement policy, and the cache write policy (write-through or write-back). While all of the cache blocks in a particular cache are the same size and have the same associativity, typically the "higher-level" caches (called Level 1 cache) have a smaller number of blocks, smaller block size, and fewer blocks in a set, but have very short access times. "Lower-level" caches (i.e. Level 2 and below) have progressively larger numbers of blocks, larger block size, more blocks in a set, and relatively longer access times, but are still much faster than main memory. Cache entry replacement policy is determined by a cache algorithm selected to be implemented by the processor designers. In some cases, multiple algorithms are provided for different kinds of work loads. Pipelined CPUs access memory from multiple points in the pipeline: instruction fetch, virtual-to-physical address translation, and data fetch (see classic RISC pipeline). The natural design is to use different physical caches for each of these points, so that no one physical resource has to be scheduled to service two points in the pipeline. Thus the pipeline naturally ends up with at least three separate caches (instruction, TLB, and data), each specialized to its particular role. A victim cache is a cache used to hold blocks evicted from a CPU cache upon replacement. The victim cache lies between the main cache and its refill path, and holds only those blocks of data that were evicted from the main cache. The victim cache is usually fully associative, and is intended to reduce the number of conflict misses. Many commonly used programs do not require an associative mapping for all the accesses. In fact, only a small fraction of the memory accesses of the program require high associativity. 
The victim cache exploits this property by providing high associativity to only these accesses. It was introduced by Norman Jouppi from DEC in 1990. Intel's Crystalwell variant of its Haswell processors introduced an on-package 128 MiB eDRAM Level 4 cache which serves as a victim cache to the processors' Level 3 cache. In the Skylake microarchitecture the Level 4 cache no longer works as a victim cache. One of the more extreme examples of cache specialization is the trace cache (also known as execution trace cache) found in the Intel Pentium 4 microprocessors. A trace cache is a mechanism for increasing the instruction fetch bandwidth and decreasing power consumption (in the case of the Pentium 4) by storing traces of instructions that have already been fetched and decoded. A trace cache stores instructions either after they have been decoded, or as they are retired. Generally, instructions are added to trace caches in groups representing either individual basic blocks or dynamic instruction traces. The Pentium 4's trace cache stores micro-operations resulting from decoding x86 instructions, providing also the functionality of a micro-operation cache. Having this, the next time an instruction is needed, it does not have to be decoded into micro-ops again. The Write Coalescing Cache (WCC) is a special cache that is part of the L2 cache in AMD's Bulldozer microarchitecture. Stores from both L1D caches in the module go through the WCC, where they are buffered and coalesced. The WCC's task is to reduce the number of writes to the L2 cache. A micro-operation cache (μop cache, uop cache or UC) is a specialized cache that stores micro-operations of decoded instructions, as received directly from the instruction decoders or from the instruction cache. When an instruction needs to be decoded, the μop cache is checked for its decoded form, which is re-used if cached; if it is not available, the instruction is decoded and then cached. One of the early works describing the μop cache as an alternative frontend for the Intel P6 processor family is the 2001 paper "Micro-Operation Cache: A Power Aware Frontend for Variable Instruction Length ISA". Later, Intel included μop caches in its Sandy Bridge processors and in successive microarchitectures like Ivy Bridge and Haswell. AMD implemented a μop cache in their Zen microarchitecture. Fetching complete pre-decoded instructions eliminates the need to repeatedly decode variable length complex instructions into simpler fixed-length micro-operations, and simplifies the process of predicting, fetching, rotating and aligning fetched instructions. A μop cache effectively offloads the fetch and decode hardware, thus decreasing power consumption and improving the frontend supply of decoded micro-operations. The μop cache also increases performance by more consistently delivering decoded micro-operations to the backend and eliminating various bottlenecks in the CPU's fetch and decode logic. A μop cache has many similarities with a trace cache, although a μop cache is much simpler, thus providing better power efficiency; this makes it better suited for implementations on battery-powered devices. The main disadvantage of the trace cache, leading to its power inefficiency, is the hardware complexity required for its heuristic deciding on caching and reusing dynamically created instruction traces. 
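The check-then-decode behaviour described for the μop cache is, at its core, memoisation of the decoder's output. The following sketch is a deliberately simplified illustration: the decode function and the instruction strings are invented for the example and bear no relation to real x86 decoding.

```python
# Toy illustration of the lookup-then-decode behaviour of a micro-operation cache.
# The "instructions" and decode() function are made up for the example.
uop_cache = {}

def decode(instruction):
    # Stand-in for the expensive variable-length decode step.
    return [f"uop_{part}" for part in instruction.split()]

def fetch_uops(instruction):
    if instruction in uop_cache:          # already decoded once: reuse the uops
        return uop_cache[instruction]
    uops = decode(instruction)            # slow path: decode, then cache the result
    uop_cache[instruction] = uops
    return uops

print(fetch_uops("add eax ebx"))   # decoded and cached
print(fetch_uops("add eax ebx"))   # served from the uop cache, no decode
```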
A branch target cache or branch target instruction cache, the name used on ARM microprocessors, is a specialized cache which holds the first few instructions at the destination of a taken branch. This is used by low-powered processors which do not need a normal instruction cache because the memory system is capable of delivering instructions fast enough to satisfy the CPU without one. However, this only applies to consecutive instructions in sequence; it still takes several cycles of latency to restart instruction fetch at a new address, causing a few cycles of pipeline bubble after a control transfer. A branch target cache provides instructions for those few cycles avoiding a delay after most taken branches. This allows full-speed operation with a much smaller cache than a traditional full-time instruction cache. Smart cache is a level 2 or level 3 caching method for multiple execution cores, developed by Intel. Smart Cache shares the actual cache memory between the cores of a multi-core processor. In comparison to a dedicated per-core cache, the overall cache miss rate decreases when cores do not require equal parts of the cache space. Consequently, a single core can use the full level 2 or level 3 cache while the other cores are inactive. Furthermore, the shared cache makes it faster to share memory among different execution cores. Another issue is the fundamental tradeoff between cache latency and hit rate. Larger caches have better hit rates but longer latency. To address this tradeoff, many computers use multiple levels of cache, with small fast caches backed up by larger, slower caches. Multi-level caches generally operate by checking the fastest but smallest cache, level 1 (L1), first; if it hits, the processor proceeds at high speed. If that cache misses, the slower but larger next level cache, level 2 (L2), is checked, and so on, before accessing external memory. As the latency difference between main memory and the fastest cache has become larger, some processors have begun to utilize as many as three levels of on-chip cache. Price-sensitive designs used this to pull the entire cache hierarchy on-chip, but by the 2010s some of the highest-performance designs returned to having large off-chip caches, which is often implemented in eDRAM and mounted on a multi-chip module, as a fourth cache level. In rare cases, such as in the mainframe CPU IBM z15 (2019), all levels down to L1 are implemented by eDRAM, replacing SRAM entirely (for cache, SRAM is still used for registers[citation needed]). Apple's ARM-based Apple silicon series, starting with the A14 and M1, have a 192 KiB L1i cache for each of the high-performance cores, an unusually large amount; however the high-efficiency cores only have 128 KiB. Since then other processors such as Intel's Lunar Lake and Qualcomm's Oryon have also implemented similar L1i cache sizes. The benefits of L3 and L4 caches depend on the application's access patterns. Examples of products incorporating L3 and L4 caches include the following: Finally, at the other end of the memory hierarchy, the CPU register file itself can be considered the smallest, fastest cache in the system, with the special characteristic that it is scheduled in software—typically by a compiler, as it allocates registers to hold values retrieved from main memory for, as an example, loop nest optimization. 
However, with register renaming most compiler register assignments are reallocated dynamically by hardware at runtime into a register bank, allowing the CPU to break false data dependencies and thus easing pipeline hazards. Register files sometimes also have hierarchy: The Cray-1 (circa 1976) had eight address "A" and eight scalar data "S" registers that were generally usable. There was also a set of 64 address "B" and 64 scalar data "T" registers that took longer to access, but were faster than main memory. The "B" and "T" registers were provided because the Cray-1 did not have a data cache. (The Cray-1 did, however, have an instruction cache.) When considering a chip with multiple cores, there is a question of whether the caches should be shared or local to each core. Implementing shared cache inevitably introduces more wiring and complexity. But then, having one cache per chip, rather than core, greatly reduces the amount of space needed, and thus one can include a larger cache. Typically, sharing the L1 cache is undesirable because the resulting increase in latency would make each core run considerably slower than a single-core chip. However, for the highest-level cache (usually L3, the last one called before accessing memory), having a global cache is desirable for several reasons, such as allowing a single core to use the whole cache, reducing data redundancy by making it possible for different processes or threads to share cached data, and reducing the complexity of utilized cache coherency protocols. For example, an eight-core chip with three levels may include an L1 cache for each core, one intermediate L2 cache for each pair of cores, and one L3 cache shared between all cores. A shared highest-level cache (usually L3, called before accessing memory), is usually referred to as a last-level cache (LLC). Additional techniques are used for increasing the level of parallelism when LLC is shared between multiple cores, including slicing it into multiple pieces which are addressing certain ranges of memory addresses, and can be accessed independently. In a separate cache structure, instructions and data are cached separately, meaning that a cache line is used to cache either instructions or data, but not both; various benefits have been demonstrated with separate data and instruction translation lookaside buffers. In a unified structure, this constraint is not present, and cache lines can be used to cache both instructions and data. Multi-level caches introduce new design decisions. For instance, in some processors, all data in the L1 cache must also be somewhere in the L2 cache. These caches are called strictly inclusive. Other processors (like the AMD Athlon) have exclusive caches: data are guaranteed to be in at most one of the L1 and L2 caches, never in both. Still other processors (like the Intel Pentium II, III, and 4) do not require that data in the L1 cache also reside in the L2 cache, although it may often do so. There is no universally accepted name for this intermediate policy; two common names are "non-exclusive" and "partially-inclusive". The advantage of exclusive caches is that they store more data. This advantage is larger when the exclusive L1 cache is comparable to the L2 cache, and diminishes if the L2 cache is many times larger than the L1 cache. When the L1 misses and the L2 hits on an access, the hitting cache line in the L2 is exchanged with a line in the L1. 
This exchange is quite a bit more work than just copying a line from L2 to L1, which is what an inclusive cache does. One advantage of strictly inclusive caches is that when external devices or other processors in a multiprocessor system wish to remove a cache line from the processor, they need only have the processor check the L2 cache. In cache hierarchies which do not enforce inclusion, the L1 cache must be checked as well. As a drawback, there is a correlation between the associativities of L1 and L2 caches: if the L2 cache does not have at least as many ways as all L1 caches together, the effective associativity of the L1 caches is restricted. Another disadvantage of inclusive cache is that whenever there is an eviction in L2 cache, the (possibly) corresponding lines in L1 also have to get evicted in order to maintain inclusiveness. This is quite a bit of work, and would result in a higher L1 miss rate. Another advantage of inclusive caches is that the larger cache can use larger cache lines, which reduces the size of the secondary cache tags. (Exclusive caches require both caches to have the same size cache lines, so that cache lines can be swapped on a L1 miss, L2 hit.) If the secondary cache is an order of magnitude larger than the primary, and the cache data are an order of magnitude larger than the cache tags, this tag area saved can be comparable to the incremental area needed to store the L1 cache data in the L2. Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress. To illustrate both specialization and multi-level caching, here is the cache hierarchy of the K8 core in the AMD Athlon 64 CPU. The K8 has four specialized caches: an instruction cache, an instruction TLB, a data TLB, and a data cache. Each of these caches is specialized: The K8 also has multiple-level caches. There are second-level instruction and data TLBs, which store only PTEs mapping 4 KiB. Both instruction and data caches, and the various TLBs, can fill from the large unified L2 cache. This cache is exclusive to both the L1 instruction and data caches, which means that any 8-byte line can only be in one of the L1 instruction cache, the L1 data cache, or the L2 cache. It is, however, possible for a line in the data cache to have a PTE which is also in one of the TLBs—the operating system is responsible for keeping the TLBs coherent by flushing portions of them when the page tables in memory are updated. The K8 also caches information that is never stored in memory—prediction information. These caches are not shown in the above diagram. As is usual for this class of CPU, the K8 has fairly complex branch prediction, with tables that help predict whether branches are taken and other tables which predict the targets of branches and jumps. Some of this information is associated with instructions, in both the level 1 instruction cache and the unified secondary cache. The K8 uses an interesting trick to store prediction information with instructions in the secondary cache. Lines in the secondary cache are protected from accidental data corruption (e.g. by an alpha particle strike) by either ECC or parity, depending on whether those lines were evicted from the data or instruction primary caches. Since the parity code takes fewer bits than the ECC code, lines from the instruction cache have a few spare bits. 
These bits are used to cache branch prediction information associated with those instructions. The net result is that the branch predictor has a larger effective history table, and so has better accuracy. Other processors have other kinds of predictors (e.g., the store-to-load bypass predictor in the DEC Alpha 21264). These predictors are caches in that they store information that is costly to compute. Some of the terminology used when discussing predictors is the same as that for caches (one speaks of a hit in a branch predictor), but predictors are not generally thought of as part of the cache hierarchy. The K8 keeps the instruction and data caches coherent in hardware, which means that a store into an instruction closely following the store instruction will change that following instruction. Other processors, like those in the Alpha and MIPS family, have relied on software to keep the instruction cache coherent. Stores are not guaranteed to show up in the instruction stream until a program calls an operating system facility to ensure coherency. In computer engineering, a tag RAM is used to specify which of the possible memory locations is currently stored in a CPU cache. For a simple, direct-mapped design fast SRAM can be used. Higher associative caches usually employ content-addressable memory. Implementation Cache reads are the most common CPU operation that takes more than a single cycle. Program execution time tends to be very sensitive to the latency of a level-1 data cache hit. A great deal of design effort, and often power and silicon area are expended making the caches as fast as possible. The simplest cache is a virtually indexed direct-mapped cache. The virtual address is calculated with an adder, the relevant portion of the address extracted and used to index an SRAM, which returns the loaded data. The data are byte aligned in a byte shifter, and from there are bypassed to the next operation. There is no need for any tag checking in the inner loop – in fact, the tags need not even be read. Later in the pipeline, but before the load instruction is retired, the tag for the loaded data must be read, and checked against the virtual address to make sure there was a cache hit. On a miss, the cache is updated with the requested cache line and the pipeline is restarted. An associative cache is more complicated, because some form of tag must be read to determine which entry of the cache to select. An N-way set-associative level-1 cache usually reads all N possible tags and N data in parallel, and then chooses the data associated with the matching tag. Level-2 caches sometimes save power by reading the tags first, so that only one data element is read from the data SRAM. The adjacent diagram is intended to clarify the manner in which the various fields of the address are used. Address bit 31 is most significant, bit 0 is least significant. The diagram shows the SRAMs, indexing, and multiplexing for a 4 KiB, 2-way set-associative, virtually indexed and virtually tagged cache with 64 byte (B) lines, a 32-bit read width and 32-bit virtual address. Because the cache is 4 KiB and has 64 B lines, there are just 64 lines in the cache, and we read two at a time from a Tag SRAM which has 32 rows, each with a pair of 21 bit tags. Although any function of virtual address bits 31 through 6 could be used to index the tag and data SRAMs, it is simplest to use the least significant bits. 
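The geometry quoted for this example cache (64 lines, a 32-row tag SRAM, 21-bit tags) follows directly from the stated parameters, as the short recomputation below shows; the data-SRAM figure it derives matches the 512-row, 8-byte-wide organization described next.

```python
# Recomputing the example cache geometry from the parameters given in the text.
CACHE_BYTES = 4 * 1024     # 4 KiB
LINE_BYTES = 64            # 64 B cache lines
WAYS = 2                   # 2-way set associative
ADDR_BITS = 32             # 32-bit virtual address
READ_BYTES = 4             # 32-bit read width

lines = CACHE_BYTES // LINE_BYTES           # 64 lines in the cache
sets = lines // WAYS                        # 32 sets, so the tag SRAM has 32 rows
offset_bits = LINE_BYTES.bit_length() - 1   # 6 bits select a byte within the line
index_bits = sets.bit_length() - 1          # 5 bits select the set
tag_bits = ADDR_BITS - index_bits - offset_bits   # 21-bit tags, two per row

data_sram_rows = CACHE_BYTES // (READ_BYTES * WAYS)  # 512 rows, 8 bytes wide
print(lines, sets, tag_bits, data_sram_rows)         # 64 32 21 512
```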
Similarly, because the cache is 4 KiB, has a 4 B read path, and reads two ways for each access, the Data SRAM is 512 rows by 8 bytes wide.

A more modern cache might be 16 KiB, 4-way set-associative, virtually indexed, virtually hinted, and physically tagged, with 32 B lines, a 32-bit read width and 36-bit physical addresses. The read path recurrence for such a cache looks very similar to the path above. Instead of tags, virtual hints are read and matched against a subset of the virtual address. Later on in the pipeline, the virtual address is translated into a physical address by the TLB, and the physical tag is read (just one, as the virtual hint supplies which way of the cache to read). Finally the physical address is compared to the physical tag to determine if a hit has occurred. Some SPARC designs have improved the speed of their L1 caches by a few gate delays by collapsing the virtual address adder into the SRAM decoders. (See sum-addressed decoder.)

The early history of cache technology is closely tied to the invention and use of virtual memory. Because of the scarcity and cost of semiconductor memory, early mainframe computers in the 1960s used a complex hierarchy of physical memory, mapped onto a flat virtual memory space used by programs. The memory technologies spanned semiconductor, magnetic core, drum and disc. The virtual memory seen and used by programs was flat, and caching was used to fetch data and instructions into the fastest memory ahead of processor access. Extensive studies were done to optimize the cache sizes. Optimal values were found to depend greatly on the programming language used, with Algol needing the smallest and Fortran and Cobol needing the largest cache sizes.

In the early days of microcomputer technology, memory access was only slightly slower than register access. But since the 1980s the performance gap between processor and memory has been growing. Microprocessors have advanced much faster than memory, especially in terms of their operating frequency, so memory became a performance bottleneck. While it was technically possible to have all the main memory as fast as the CPU, a more economically viable path has been taken: use plenty of low-speed memory, but also introduce a small high-speed cache memory to alleviate the performance gap. This provided an order of magnitude more capacity, for the same price, with only a slightly reduced combined performance.

The first documented uses of a TLB were on the GE 645 and the IBM 360/67, both of which used an associative memory as a TLB. The first documented use of an instruction cache was on the CDC 6600. The first documented use of a data cache was on the IBM System/360 Model 85.

The 68010, released in 1982, has a "loop mode" which can be considered a tiny and special-case instruction cache that accelerates loops that consist of only two instructions. The 68020, released in 1984, replaced that with a typical instruction cache of 256 bytes, being the first 68k series processor to feature true on-chip cache memory. The 68030, released in 1987, is basically a 68020 core with an additional 256-byte data cache, an on-chip memory management unit (MMU), a process shrink, and added burst mode for the caches. The 68040, released in 1990, has split instruction and data caches of four kilobytes each.
The 68060, released in 1994, has the following: an 8 KiB data cache (four-way associative), an 8 KiB instruction cache (four-way associative), a 96-byte FIFO instruction buffer, a 256-entry branch cache, and a 64-entry address translation cache MMU buffer (four-way associative).

As the x86 microprocessors reached clock rates of 20 MHz and above in the 386, small amounts of fast cache memory began to be featured in systems to improve performance. This was because the DRAM used for main memory had significant latency, up to 120 ns, as well as refresh cycles. The cache was constructed from more expensive, but significantly faster, SRAM memory cells, which at the time had latencies around 10–25 ns. The early caches were external to the processor and typically located on the motherboard in the form of eight or nine DIP devices placed in sockets to enable the cache as an optional extra or upgrade feature. Some versions of the Intel 386 processor could support 16 to 256 KiB of external cache.

With the 486 processor, an 8 KiB cache was integrated directly into the CPU die. This cache was termed Level 1 or L1 cache to differentiate it from the slower on-motherboard, or Level 2 (L2), cache. These on-motherboard caches were much larger, with the most common size being 256 KiB. Some system boards contained sockets for the Intel 485Turbocache daughtercard, which had either 64 or 128 KiB of cache memory. The popularity of on-motherboard cache continued through the Pentium MMX era but was made obsolete by the introduction of SDRAM and the growing disparity between bus clock rates and CPU clock rates, which caused on-motherboard cache to be only slightly faster than main memory.

The next development in cache implementation in the x86 microprocessors began with the Pentium Pro, which brought the secondary cache onto the same package as the microprocessor, clocked at the same frequency as the microprocessor. On-motherboard caches enjoyed prolonged popularity thanks to the AMD K6-2 and AMD K6-III processors that still used Socket 7, which was previously used by Intel with on-motherboard caches. The K6-III included 256 KiB of on-die L2 cache and took advantage of the on-board cache as a third-level cache, named L3 (motherboards with up to 2 MiB of on-board cache were produced). After Socket 7 became obsolete, on-motherboard cache disappeared from x86 systems.

Three-level caches were used again with the introduction of the Intel Xeon MP "Foster Core", which added the L3 cache to the CPU die. Total cache sizes have tended to grow in newer processor generations, and recently (as of 2011) it is not uncommon to find Level 3 cache sizes of tens of megabytes. Intel introduced a Level 4 on-package cache with the Haswell microarchitecture. Crystalwell Haswell CPUs, equipped with the GT3e variant of Intel's integrated Iris Pro graphics, effectively feature 128 MiB of embedded DRAM (eDRAM) on the same package. This L4 cache is shared dynamically between the on-die GPU and CPU, and serves as a victim cache to the CPU's L3 cache.

The Apple M1 CPU has 128 or 192 KiB of L1 instruction cache for each core, depending on core type; a large L1 cache matters for latency and single-thread performance. This is an unusually large L1 cache for any CPU type, not just for a laptop processor. The total cache memory size, which matters more for throughput, is not unusually large for a laptop, and much larger total (e.g. L3 or L4) sizes are available in IBM's mainframes.
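Stepping back from particular processors, the economic argument above (plenty of slow memory plus a small fast cache) rests on the averaged cost of a memory access. The sketch below works through the usual average-memory-access-time arithmetic: hit time plus miss rate times miss penalty. All of the latencies and the miss rate are hypothetical round numbers chosen only for illustration, not measurements of any real system.

```c
/* Average memory access time (AMAT) = hit time + miss rate * miss penalty.
 * The numbers are hypothetical: they only illustrate why a small, fast
 * cache in front of slow DRAM performs close to an all-fast memory. */
#include <stdio.h>

int main(void)
{
    double hit_ns    = 2.0;    /* assumed SRAM cache hit latency   */
    double mem_ns    = 120.0;  /* assumed DRAM main-memory latency */
    double miss_rate = 0.03;   /* assumed cache miss rate (3%)     */

    double amat = hit_ns + miss_rate * mem_ns;  /* 2.0 + 0.03 * 120 = 5.6 */

    printf("all-fast memory:     %5.1f ns per access\n", hit_ns);
    printf("cache + slow DRAM:   %5.1f ns average per access\n", amat);
    printf("slow DRAM only:      %5.1f ns per access\n", mem_ns);
    return 0;
}
```

Under these assumed numbers the combined hierarchy averages 5.6 ns per access, far closer to the all-SRAM figure than to uncached DRAM, which is the trade-off the history above describes.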
Early cache designs focused entirely on the direct cost of cache and RAM and on average execution speed. More recent cache designs also consider energy efficiency, fault tolerance, and other goals. Several tools are available to computer architects to help explore tradeoffs between cache cycle time, energy, and area; the CACTI cache simulator and the SimpleScalar instruction set simulator are two open-source options.

A multi-ported cache is a cache which can serve more than one request at a time. When accessing a traditional cache we normally use a single memory address, whereas in a multi-ported cache we may request N addresses at a time, where N is the number of ports connecting the processor and the cache. The benefit of this is that a pipelined processor may access memory from different phases in its pipeline. Another benefit is that it supports superscalar processors, which can issue more than one memory access per cycle, at the different cache levels.
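The sketch below models only the interface of a multi-ported cache: a single call services N independent addresses, roughly as a pipelined or superscalar processor might present them in one cycle. The organization, sizes, and names are hypothetical, and real multi-porting is a property of the SRAM arrays rather than something software provides.

```c
/* Functional-level sketch of an N-ported cache: one call services several
 * independent addresses "at the same time". This models only the interface;
 * the tiny direct-mapped organization and all names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PORTS       2           /* N requests per access    */
#define LINE_BYTES  64u
#define NUM_LINES   64u         /* tiny direct-mapped cache */

typedef struct { bool valid; uint64_t tag; } line_t;

static line_t cache[NUM_LINES];

/* Look up all 'n' addresses in one call; hit[i] reports each port's result. */
static void cache_read_ports(const uint64_t addr[], bool hit[], int n)
{
    for (int p = 0; p < n; p++) {           /* each iteration models one port */
        uint64_t block = addr[p] / LINE_BYTES;
        uint64_t idx   = block % NUM_LINES;
        uint64_t tag   = block / NUM_LINES;
        hit[p] = cache[idx].valid && cache[idx].tag == tag;
    }
}

int main(void)
{
    uint64_t addrs[PORTS] = { 0x1000, 0x2040 };
    bool hits[PORTS];

    /* Pre-load the line holding 0x1000 so one port hits and one misses. */
    cache[(0x1000 / LINE_BYTES) % NUM_LINES] =
        (line_t){ true, (0x1000 / LINE_BYTES) / NUM_LINES };

    cache_read_ports(addrs, hits, PORTS);
    for (int p = 0; p < PORTS; p++)
        printf("port %d: %s\n", p, hits[p] ? "hit" : "miss");
    return 0;
}
```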
========================================
[SOURCE: https://en.wikipedia.org/wiki/Dames_blanches] | [TOKENS: 451]
Contents Dames blanches In French mythology or folklore, Dames Blanches (meaning literally white ladies) were female spirits or supernatural beings, comparable to the Weiße Frauen of both Dutch and German mythology. The Dames Blanches were reported in the regions of Lorraine and Normandy. They also appear (as Damas blancas in Occitan) in the Pyrenees mountains, where they were supposed to appear near caves and caverns. Thomas Keightley (1870) describes the Dames Blanches as a type of Fée known in Normandy "who are of a less benevolent character." They lurk in narrow places such as ravines, forests, and bridges, and try to attract the attention of passersby. A Dame Blanche may require a passerby to join in her dance or to assist her in order to pass. If assisted, she "makes him many courtesies, and then vanishes." One such Dame, known as La Dame d'Apringy, appeared in a ravine at the Rue Quentin at Bayeux in Normandy, where one had to dance a few rounds with her to pass. Those who refused were thrown into the thistles and briars, while those who danced were not harmed. Another Dame was known on a narrow bridge in the district of Falaise, named the Pont d'Angot. She only allowed people to pass if they went on their knees to her. Anyone who refused was tormented by the lutins, cats, owls, and other creatures who helped her.

Origins J. A. MacCulloch believed the Dames Blanches to be one of the recharacterizations of pre-Christian female goddesses, and suggested that the name Dame may have derived from the ancient guardian goddesses known as the Matres; looking at old inscriptions to guardian goddesses, he wrote that "the Dominæ, who watched over the home, perhaps became the Dames of mediæval folk-lore." The Dames Blanches have close counterparts in both name and characterization in neighboring northern countries: in Germany, the Weiße Frauen, and in the Dutch Low Countries, the Witte Wieven.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Auriga] | [TOKENS: 11034]
Contents Auriga Auriga is a constellation in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. Its name is Latin for '(the) charioteer', associating it with various mythological beings, including Erichthonius and Myrtilus. Auriga is most prominent during winter evenings in the northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Because of its northern declination, Auriga is only visible in its entirety as far south as −34°; for observers farther south it lies partially or fully below the horizon. A large constellation, with an area of 657 square degrees, it is half the size of the largest, Hydra. Its brightest star, Capella, is an unusual multiple star system and amongst the brightest stars in the night sky. Beta Aurigae is an interesting variable star in the constellation; Epsilon Aurigae, a nearby eclipsing binary with an unusually long period, has been studied intensively. Because of its position near the winter Milky Way, Auriga has many bright open clusters in its borders, including M36, M37, and M38, popular targets for amateur astronomers. In addition, it has one prominent nebula, the Flaming Star Nebula, associated with the variable star AE Aurigae. In Chinese mythology, Auriga's stars were incorporated into several constellations, including the celestial emperors' chariots, made up of the modern constellation's brightest stars. Auriga is home to the radiant for the Aurigids, Zeta Aurigids, Delta Aurigids, and the hypothesized Iota Aurigids. History and mythology The first record of Auriga's stars was in Mesopotamia as a constellation called GAM, representing a scimitar or crook. However, this may have represented just Capella (Alpha Aurigae) or the modern constellation as a whole; this figure was alternatively called Gamlum or MUL.GAM in the MUL.APIN. The crook of Auriga stood for a goat-herd or shepherd. It was formed from most of the stars of the modern constellation; all of the bright stars were included except for Elnath, traditionally assigned to both Taurus and Auriga. Later, Bedouin astronomers created constellations that were groups of animals, where each star represented one animal. The stars of Auriga comprised a herd of goats, an association also present in Greek mythology. The association with goats carried into the Greek astronomical tradition, though it later became associated with a charioteer along with the shepherd. In Greek mythology, Auriga is often identified as the hero Erichthonius of Athens, the chthonic son of Hephaestus who was raised by the goddess Athena. Erichthonius was generally credited to be the inventor of the quadriga, the four-horse chariot, which he used in the battle against the usurper Amphictyon, the event that made Erichthonius the king of Athens. His chariot was created in the image of the Sun's chariot, the reason Zeus placed him in the heavens. The Athenian hero then dedicated himself to Athena and, soon after, Zeus raised him into the night sky in honor of his ingenuity and heroic deeds. Auriga, however, is sometimes described as Myrtilus, who was Hermes's son and the charioteer of Oenomaus. The association of Auriga and Myrtilus is supported by depictions of the constellation, which rarely show a chariot. Myrtilus's chariot was destroyed in a race intended for suitors to win the heart of Oenomaus's daughter Hippodamia. 
Myrtilus earned his position in the sky when Hippodamia's successful suitor, Pelops, killed him, despite his complicity in helping Pelops win her hand. After his death, Myrtilus's father Hermes placed him in the sky. Yet another mythological association of Auriga is Theseus's son Hippolytus. He was ejected from Athens after he refused the romantic advances of his stepmother Phaedra, who committed suicide as a result. He was killed when his chariot was wrecked, but revived by Asclepius. In late antiquity two Latin poets, Claudian and Nonnus, identified Auriga as Phaethon in their works. In the common version of the myth, Phaethon, the son of the sun Helios, attempted to drive his father's chariot for a day. Unable to control the chariot, Phaethon veered off course, causing chaos on the earth below. In order to avert further disaster, Zeus killed Phaethon with a thunderbolt and his burned remains fell to the earth, landing in a river. Claudian and Nonnus add to this story that Helios then placed Phaethon in the sky as the constellation Auriga. Regardless of Auriga's specific representation, it is possible that the constellation was created by the ancient Greeks to commemorate the importance of the chariot in their society. An incidental appearance of Auriga in Greek mythology is as the limbs of Medea's brother. In the myth of Jason and the Argonauts, as they journeyed home, Medea killed her brother and dismembered him, flinging the parts of his body into the sea, represented by the Milky Way. Each individual star represents a different limb. Capella is associated with the mythological she-goat Amalthea, who breast-fed the infant Zeus. It forms an asterism with the stars Epsilon Aurigae, Zeta Aurigae, and Eta Aurigae, the latter two of which are known as the Haedi (the Kids). Though most often associated with Amalthea, Capella has sometimes been associated with Amalthea's owner, a nymph. The myth of the nymph says that the goat's hideous appearance, resembling a Gorgon, was partially responsible for the Titans' defeat, because Zeus skinned the goat and wore it as his aegis. The asterism containing the goat and kids had been a separate constellation; however, Ptolemy merged the Charioteer and the Goats in the 2nd-century Almagest. Before that, Capella was sometimes seen as its own constellation—by Pliny the Elder and Manilius—called Capra, Caper, or Hircus, all of which relate to its status as the "goat star". Zeta Aurigae and Eta Aurigae were first called the "Kids" by Cleostratus, an ancient Greek astronomer. Traditionally, illustrations of Auriga represent it as a chariot and its driver. The charioteer holds a goat over his left shoulder and has two kids under his left arm; he holds the reins to the chariot in his right hand. However, depictions of Auriga have been inconsistent over the years. The reins in his right hand have also been drawn as a whip, though Capella is almost always over his left shoulder and the Kids under his left arm. The 1488 atlas Hyginus deviated from this typical depiction by showing a four-wheeled cart driven by Auriga, who holds the reins of two oxen, a horse, and a zebra. Jacob Micyllus depicted Auriga in his Hyginus of 1535 as a charioteer with a two-wheeled cart, powered by two horses and two oxen. Arabic and Turkish depictions of Auriga varied wildly from those of the European Renaissance; one Turkish atlas depicted the stars of Auriga as a mule, called Mulus clitellatus by Johann Bayer. 
One unusual representation of Auriga, from 17th-century France, showed Auriga as Adam kneeling on the Milky Way, with a goat wrapped around his shoulders. Occasionally, Auriga is seen not as the Charioteer but as Bellerophon, the mortal rider of Pegasus who dared to approach Mount Olympus. In this version of the tale, Jupiter pitied Bellerophon for his foolishness and placed him in the stars. Oxford research finds it likely the group was equally named Agitator in about the 15th century and provides a quotation as late as 1623, from a Gerard de Malynes multi-topic work. Some of the stars of Auriga were incorporated into a now-defunct constellation called Telescopium Herschelii. This constellation was introduced by Maximilian Hell to honor William Herschel's discovery of Uranus. Originally, it included two constellations, Tubus Hershelii Major [sic], in Gemini, Lynx, and Auriga, and Tubus Hershelii Minor [sic] in Orion and Taurus; both represented Herschel's telescopes. Johann Bode combined Hell's constellations into Telescopium Herschelii in 1801, located mostly in Auriga. Since the time of Ptolemy, Auriga has remained a constellation and is officially recognized by the International Astronomical Union, although like all modern constellations, it is now defined as a specific region of the sky that includes both the ancient pattern and the surrounding stars. In 1922, the IAU designated its recommended three-letter abbreviation, "Aur". The official boundaries of Auriga were created in 1930 by Belgian astronomer Eugène Delporte as a polygon of 20 segments. Its right ascension is between 4h 37.5m and 7h 30.5m and its declination is between 27.9° and 56.2° in the equatorial coordinate system. The stars of Auriga were incorporated into several Chinese constellations. Wuche, the five chariots of the celestial emperors and the representation of the grain harvest, was a constellation formed by Alpha Aurigae, Beta Aurigae, Beta Tauri, Theta Aurigae, and Iota Aurigae. Sanzhu or Zhu was one of three constellations which represented poles for horses to be tethered. They were formed by the triplets of Epsilon, Zeta, and Eta Aurigae; Nu, Tau, and Upsilon Aurigae; and Chi and 26 Aurigae, with one other undetermined star. Xianchi, the pond where the sun set and Tianhuang, a pond, bridge, or pier, were other constellations in Auriga, though the stars that composed them are undetermined. Zuoqi, representing chairs for the emperor and other officials, was made up of nine stars in the east of the constellation. Bagu, a constellation mostly formed from stars in Camelopardalis representing different types of crops, included the northern stars of Delta and Xi Aurigae. In ancient Hindu astronomy, Capella represented the heart of Brahma and was important religiously. Ancient Peruvian peoples saw Capella, called Colca, as a star intimately connected to the affairs of shepherds. In Brazil, the Bororo people incorporate the stars of Auriga into a massive constellation representing a caiman; its southern stars represent the end of the animal's tail. The eastern portion of Taurus is the rest of the tail, while Orion is its body and Lepus is the head. This constellation arose because of the prominence of caymans in daily Amazonian life. There is evidence that Capella was significant to the Aztec people, as the Late Classic site Monte Albán has a marker for the star's heliacal rising. Indigenous peoples of California and Nevada also noticed the bright pattern of Auriga's stars. 
To them, the constellation's bright stars formed a curve that was represented in crescent-shaped petroglyphs. The indigenous Pawnee of North America recognized a constellation with the same major stars as modern Auriga: Alpha, Beta, Gamma (Beta Tauri), Theta, and Iota Aurigae. The people of the Marshall Islands featured Auriga in the myth of Dümur, which tells the story of the creation of the sky. Antares in Scorpius represents Dümur, the oldest son of the stars' mother, and the Pleiades represent her youngest son. The mother of the stars, Ligedaner, is represented by Capella; she lived on the island of Alinablab. She told her sons that the first to reach an eastern island would become the King of the Stars, and asked Dümur to let her come in his canoe. He refused, as did each of her sons in turn, except for Pleiades. Pleiades won the race with the help of Ligedaner, and became the King of the Stars. Elsewhere in the central Caroline Islands, Capella was called Jefegen uun (variations include efang alul, evang-el-ul, and iefangel uul), meaning "north of Aldebaran". Different names were noted for Auriga and Capella in Eastern Pacific societies. On Pukapuka, the figure of modern Auriga was called Te Wale-o-Tutakaiolo ("The house of Tutakaiolo"); in the Society Islands, it was called Faa-nui ("Great Valley"). Capella itself was called Tahi-anii ("Unique Sovereign") in the Societies. Hoku-lei was the name for Capella but may have been the name for the whole constellation; the name means "Star-wreath" and refers to one of the wives of the Pleiades, called Makalii. The stars of Auriga feature in Inuit constellations. Quturjuuk, meaning "collar-bones", was a constellation that included Capella (Alpha Aurigae), Menkalinan (Beta Aurigae), Pollux (Beta Geminorum), and Castor (Alpha Geminorum). Its rising signalled that the constellation Aagjuuk, made up of Altair (Alpha Aquilae), Tarazed (Gamma Aquilae), and sometimes Alshain (Beta Aquilae), would rise soon. Aagjuuk, which represented the dawn following the winter solstice, was an incredibly important constellation in the Inuit mythos. It was also used for navigation and time-keeping at night. Features Alpha Aurigae (Capella), the brightest star in Auriga, is a G8III class star (G-type giant) 43 light-years away and the sixth-brightest star in the night sky at magnitude 0.08. Its traditional name is a reference to its mythological position as Amalthea; it is sometimes called the "Goat Star". Capella's names all point to this mythology. In Arabic, Capella was called al-'Ayyuq, meaning "the goat", and in Sumerian, it was called mul.ÁŠ.KAR, "the goat star". On Ontong Java, Capella was called ngahalapolu. Capella is a spectroscopic binary with a period of 104 days; the components are both yellow giants, more specifically, the primary is a G-type star and the secondary is between a G-type and F-type star in its evolution. The secondary is formally classified as a G0III class star (G-type giant). The primary has a radius of 11.87 solar radii (R☉) and a mass of 2.47 solar masses (M☉); the secondary has a radius of 8.75 R☉ and a mass of 2.44 M☉. The two components are separated by 110 million kilometers, almost 75% of the distance between the Earth and the Sun. The star's status as a binary was discovered in 1899 at the Lick Observatory; its period was determined in 1919 by J.A. Anderson at the 100-inch Mt. Wilson Observatory telescope. 
It appears with a golden-yellow hue, though Ptolemy and Giovanni Battista Riccioli both described its color as red, a phenomenon attributed not to a change in Capella's color but to the idiosyncrasies of their color sensitivities. Capella has an absolute magnitude of 0.3 and a luminosity of 160 times the luminosity of the Sun, or 160 L☉ (the primary is 90 L☉ and the secondary is 70 L☉). It may be loosely associated with the Hyades, an open cluster in Taurus, because of their similar proper motion. Capella has one more companion, Capella H, which is a pair of red dwarf stars located 11,000 astronomical units (0.17 light-years) from the main pair. Beta Aurigae (Menkalinan, Menkarlina) is a bright A2IV class star (A-type subgiant). Its Arabic name comes from the phrase mankib dhu al-'inan, meaning "shoulder of the charioteer" and is a reference to Beta Aurigae's location in the constellation. Menkalinan is 81 light-years away and has a magnitude of 1.90. Like Epsilon Aurigae, it is an eclipsing binary star that varies in magnitude by 0.1m. The two components are blue-white stars that have a period of 3.96 days. Its double nature was revealed spectroscopically in 1890 by Antonia Maury, making it the second spectroscopic binary discovered, and its variable nature was discovered photometrically 20 years later by Joel Stebbins. Menkalinan has an absolute magnitude of 0.6 and a luminosity of 50 L☉. The component of its motion in the direction of Earth is 18 kilometres (11 mi) per second. Beta Aurigae may be associated with a stream of about 70 stars including Delta Leonis and Alpha Ophiuchi; the proper motion of this group is comparable to that of the Ursa Major Moving Group, though the connection is only hypothesized. Besides its close eclipsing companion, Menkalinan has two other stars associated with it. One is an unrelated optical companion, discovered in 1783 by William Herschel; it has a magnitude of 10.5 and has a separation of 184 arcseconds. The other is likely associated gravitationally with the primary, as determined by their common proper motion. This 14th-magnitude star was discovered in 1901 by Edward Emerson Barnard. It has a separation of 12.6 arcseconds, and is around 350 astronomical units from the primary. Besides particularly bright stars of Alpha and Beta Aurigae, Auriga has many dimmer naked-eye visible stars. Gamma Aurigae, now known under its once co-name Beta Tauri (El Nath, Alnath) is a B7III class star (B-type giant). At about +1.65 it would rank a clear third in apparent magnitude if still co-placed in Auriga. It is a mercury-manganese star, with some large signatures of heavy elements. Iota Aurigae, also called Hasseleh and Kabdhilinan, is a K3II class star (K-type bright giant) of magnitude 2.69; it is about 494 light-years away from Earth. It evolved from a B-type star to K-type over the estimated 30–45 million years since its birth. It has an absolute magnitude of −2.3 and a luminosity of 700 L☉. It is classed as a particularly luminous bright giant but its light is in part "extinguished" (blocked) by intra-galactic dust clouds — astronomers estimate by these it appears 0.6 magnitudes fainter. It is also a hybrid star, an x-ray producing giant star that emits x-rays from its corona and has a cool stellar wind. Though its proper motion is just 0.02 arcseconds per year, it has a radial velocity of 10.5 miles (16.9 km) per second in recession. 
The traditional name Kabdhilinan, sometimes shortened to "Alkab", comes from the Arabic phrase al-kab dh'il inan, meaning "shoulder of the rein holder". Iota may end as a supernova, but because it is close to the mass limit for such stars, it may instead become a white dwarf. Delta Aurigae, also known as Bagu the northernmost bright star in Auriga, is a K0III-type star (K-type giant), 126 light-years from Earth and approximately 1.3 billion years old. It has a magnitude of 3.72, an absolute magnitude of 0.2, and a luminosity of 60 L☉. About 12 times the radius of the Sun, Delta weighs only two solar masses and rotates with a period of almost one year. Though it is often listed as a single star, it actually has three very widely spaced optical companions. One is a double star of magnitude 11, two arcminutes apart; the other is a star of magnitude 10, three arcminutes apart. Lambda Aurigae (Al Hurr) is a G1.5IV-V-type star (G-type star intermediate between a subgiant and main-sequence star) of magnitude 4.71. It has an absolute magnitude of 4.4 and is 41 light-years from Earth. It has very weak emissions in the infrared spectrum, like Epsilon Aurigae. In photometric observations of Epsilon, an unusual variable, Lambda is commonly used as a comparison star. It is reaching the end of its hydrogen-fusing lifespan at an age of 6.2 billion years. It also has an unusually high radial velocity at 83 km/second. Though older than the Sun, it is similar in many ways; its mass is 1.07 solar masses, a radius of 1.3 solar radii, and a rotational period of 26 days. However, it differs from the Sun in its metallicity; its iron content is 1.15 times that of the Sun and it has relatively less nitrogen and carbon. Like Delta, it has several optical companions and is often categorized as a single star. The brightest companions are of magnitude 10, separated by 175 and 203 arcseconds. The dimmer companions are of magnitude 13 and 14, 87 and 310 arcseconds from Lambda, respectively. Nu Aurigae is a G9.5III (G-type giant) star of magnitude 3.97, 230 light-years from Earth. It has a luminosity of 60 L☉ and an absolute magnitude of 0.2. Nu is a giant star with a radius of 20–21 solar radii and a mass of approximately 3 solar masses. It may technically be a binary star; its companion, sometimes listed as optical and separated by 56 arcseconds, is a dwarf star of spectral type K6 and magnitude 11.4. Its period is more than 120,000 years and it orbits at least 3,700 AU from the primary. The most prominent variable star in Auriga is Epsilon Aurigae (Almaaz), an F0 class eclipsing binary star with an unusually long period of 27 years; its last minima occurred in 1982–1984 and 2009–2011. The distance to the system is disputed, variously cited as 4600 and 2,170 light-years. The primary is a white supergiant, and the secondary may be itself a binary star within a large dusty disk. Its maximum magnitude is 3.0, but it stays at a minimum magnitude of 3.8 for around a year; its most recent eclipse began in 2009. The primary has an absolute magnitude of −8.5 and an unusually high luminosity of 200,000 L☉, the reason it appears so bright at such a great distance. Epsilon Aurigae is the longest-period eclipsing binary currently known. The first observed eclipse of Epsilon Aurigae occurred in 1821, though its variable status was not confirmed until the eclipse of 1847–48. From that time forward, many theories were put forth as to the nature of the eclipsing component. 
Epsilon Aurigae has a noneclipsing component, which is visible as a 14th magnitude companion separated from the primary by 28.6 arcseconds. It was discovered by Sherburne Wesley Burnham in 1891 at the Dearborn Observatory, and is about 0.5 light-years from the primary. Another eclipsing binary in Auriga, part of the Haedi asterism with Eta Aurigae, is Zeta Aurigae (Saclateni), an eclipsing binary star at a distance of 776 light-years with a period of 2 years and 8 months. It has an absolute magnitude of −2.3. The primary is an orange-hued K5II-type star (K-type bright giant) and the secondary is a smaller blue star similar to Regulus; its period is 972 days. The secondary is a B7V-type star, a B-type main-sequence star. Zeta Aurigae's maximum magnitude is 3.7 and its minimum magnitude is 4.0. The full eclipse of the small blue star by the orange giant lasts 38 days, with two partial phases of 32 days at the beginning and end. The primary has a diameter of 150 D☉ and a luminosity of 700 L☉; the secondary has a diameter of 4 D☉ and a luminosity of 140 L☉. Zeta Aurigae was spectroscopically determined to be a double star by Antonia Maury in 1897 and was confirmed as a binary star in 1908 by William Wallace Campbell. The two stars orbit each other about 500,000,000 miles (800,000,000 km) apart. Zeta Aurigae is moving away from Earth at a rate of 8 miles (13 km) per second. The second of the two Haedi or "Kids" is Eta Aurigae, a B3 class star located 243 light-years from Earth with a magnitude of 3.17. It is a B3V class star, meaning that it is a blue-white hued main-sequence star. Eta Aurigae has an absolute magnitude of −1.7 and a luminosity of 450 L☉. Eta Aurigae is moving away from Earth at a rate of 4.5 miles (7.2 km) per second. T Aurigae (Nova Aurigae 1891) was a nova discovered at magnitude 5.0 on January 23, 1892, by Thomas David Anderson. It became visible to the naked eye by December 10, 1891, as shown on photographic plates examined after the nova's discovery. It then brightened by a factor of 2.5 from December 11 to December 20, when it reached a maximum magnitude of 4.4. T Aurigae faded slowly in January and February 1892, then faded quickly during March and April, reaching a magnitude of 15 in late April. However, its brightness began to increase in August, reaching magnitude 9.5, where it stayed until 1895. Over the subsequent two years, its brightness decreased to 11.5, and by 1903, it was approximately 14th magnitude. By 1925, it had reached its current magnitude of 15.5. When the nova was discovered, its spectrum showed material moving at a high speed towards Earth. However, when the spectrum was examined again in August 1892, it appeared to be a planetary nebula. Observations at the Lick Observatory by Edward Emerson Barnard showed it to be disc-shaped, with clear nebulosity in a diameter of 3 arcseconds. The shell had a diameter of 12 arcseconds in 1943. T Aurigae is classified as a slow nova, similar to DQ Herculis. Like DQ Herculis, WZ Sagittae, Nova Persei 1901 and Nova Aquilae 1918, it is a very close binary with a very short period. T Aurigae's period of 4.905 hours is comparable to DQ Herculis's period of 4.65 hours, and it has a partial eclipse period of 40 minutes. There are many other variable stars of different types in Auriga. ψ1 Aurigae is an orange-hued supergiant, which ranges between magnitudes 4.8 and 5.7, though not with a regular period. It has a spectral class of K5Iab, an average magnitude of 4.91, and an absolute magnitude of −5.7. 
It lies 3,976 light-years from Earth. RT Aurigae is a Cepheid variable which ranges between magnitudes 5.0 and 5.8 over a period of 3.7 days. A yellow-white supergiant, it lies at a distance of 1,600 light-years. It was discovered to be variable by English amateur T.H. Astbury in 1905. It has a spectral class of F81bv, meaning that it is an F-type supergiant star. RX Aurigae is a Cepheid variable as well; it varies in magnitude from a minimum of 8.0 to a maximum of 7.3; its spectral class is G0Iabv. It has a period of 11.62 days. RW Aurigae is the prototype of its class of irregular variable stars. Its variability was discovered in 1906 by Lydia Ceraski at the Moscow Observatory. RW Aurigae's spectrum indicates a turbulent stellar atmosphere, and has prominent emission lines of calcium and hydrogen. Its spectral type is G5V:e. SS Aurigae is an SS Cygni-type variable star, classified as an explosive dwarf. Discovered by Emil Silbernagel in 1907, it is almost always at its minimum magnitude of 15, but brightens to a maximum up to 60 times brighter than the minimum an average of every 55 days, though the period can range from 50 days to more than 100 days. It takes about 24 hours for the star to go from its minimum to maximum magnitude. SS Aurigae is a very close binary star with a period of 4 hours and 20 minutes. Both components are small subdwarf stars; there has been dispute in the scientific community about which star originates the outbursts. UU Aurigae is a variable red giant star at a distance of 2,000 light-years. It has a period of approximately 234 days and ranges between magnitudes 5.0 and 7.0. AE Aurigae is a blue-hued main-sequence variable star. It is normally of magnitude 6.0, but its magnitude varies irregularly. AE Aurigae is associated with the 9-light-year-wide Flaming Star Nebula (IC 405), which it illuminates. However, AE Aurigae likely entered the nebula only recently, as determined through the discrepancy between the radial velocities of the star and the nebula, 36 miles (58 km) per second and 13 miles (21 km) per second, respectively. It has been hypothesized that AE Aurigae is a "runaway star" from the young cluster in the Orion Nebula, leaving the cluster approximately 2.7 million years ago. It is similar to 53 Arietis and Mu Columbae, other runaway stars from the Orion cluster. Its spectral class is O9.5Ve, meaning that it is an O-type main-sequence star. The Flaming Star Nebula, is located near IC 410 in the celestial sphere. IC 410 obtained its name from its appearance in long exposure astrophotographs; it has extensive filaments that make AE Aurigae appear to be on fire. There are four Mira variable stars in Auriga: R Aurigae, UV Aurigae, U Aurigae, and X Aurigae, all of which are type M stars. More specifically, R Aurigae is of type M7III, UV Aurigae is of type C6 (a carbon star), U Aurigae is of type M9, and X Aurigae is of type K2. R Aurigae, with a period of 457.5 days, ranges in magnitude from a minimum of 13.9 to a maximum of 6.7. UV Aurigae, with a period of 394.4 days, ranges in magnitude from a minimum of 10.6 to a maximum of 7.4. U Aurigae, with a period of 408.1 days, ranges in magnitude from a minimum of 13.5 to a maximum of 7.5. X Aurigae, with a particularly short period of 163.8 days, ranges in magnitude from a minimum of 13.6 to a maximum of 8.0. Auriga is home to several less prominent binary and double stars. Theta Aurigae (Bogardus, Mahasim) is a blue-white A0p class binary star of magnitude 2.62 with a luminosity of 75 L☉. 
It has an absolute magnitude of 0.1 and is 165 light-years from Earth. The secondary is a yellow star of magnitude 7.1, which requires a telescope of 100 millimetres (3.9 in) in aperture to resolve; the two stars are separated by 3.6 arcseconds. It is the eastern vertex of the constellation's pentagon. Theta Aurigae is moving away from Earth at a rate of 17.5 miles (28.2 km) per second. Theta Aurigae additionally has a second optical companion, discovered by Otto Wilhelm von Struve in 1852. The separation was at 52 arcseconds in 1978 and has been increasing since then because of the proper motion of Theta Aurigae, 0.1 arcseconds per year. The separation of this magnitude 9.2 component was 2.2 arcminutes (130.7 arcseconds) in 2007 with an angle of 350°. 4 Aurigae is a double star at a distance of 159 light-years. The primary is of magnitude 5.0 and the secondary is of magnitude 8.1. 14 Aurigae is a white optical binary star. The primary is of magnitude 5.0 and is at a distance of 270 light-years; the secondary is of magnitude 7.9 and is at a distance of 82 light-years. HD 30453 is spectroscopic binary of magnitude 5.9, with a spectral type assessed as either A8m or F0m, and a period of seven days. There are several stars with confirmed planetary systems in Auriga; there is also a white dwarf with a suspected planetary system. HD 40979 has one planet, HD 40979 b. It was discovered in 2002 through radial velocity measurements on the parent star. HD 40979 is 33.3 parsecs from Earth, a spectral class F8V star of magnitude 6.74 — just past the limit of visibility to the naked eye. It is of similar size to the Sun, at 1.1 solar masses and 1.21 solar radii. The planet, with a mass of 3.83 Jupiter masses, orbits with a semi-major axis of 0.83 AU and a period of 263.1 days. HD 45350 has one planet as well. HD 45350 b was discovered through radial velocity measurements in 2004. It has a mass of 1.79 Jupiter masses and orbits every 890.76 days at a distance of 1.92 AU. Its parent star is faint, at an apparent magnitude of 7.88, a G5IV type star 49 parsecs away. It has a mass of 1.02 solar masses and a radius of 1.27 solar radii. HD 43691 b is a significantly larger planet, with a mass of 2.49 Jupiter masses; it is also far closer to its parent star, HD 43691. Discovered in 2007 from radial velocity measurements, it orbits at a distance of 0.24 AU with a period of 36.96 days. HD 43691 has a radius identical to the Sun's, though it is more dense—its mass is 1.38 solar masses. It is a G0IV type star of magnitude 8.03, 93.2 parsecs from Earth. HD 49674 is a star in Auriga with one planet orbiting it. This G3V type star is faint, at magnitude 8.1, and fairly distant, at 40.7 parsecs from Earth. Like the other stars, it is similar in size to the Sun, with a mass of 1.07 solar masses and a radius of 0.94 solar radii. Its planet, HD 49674 b, is a smaller planet, at 0.115 Jupiter masses. It orbits very close to its star, at 0.058 AU, every 4.94 days. HD 49674 b was discovered by radial velocity observations in 2002. HAT-P-9 b is the first transiting exoplanet confirmed in Auriga, orbiting the star HAT-P-9. Unlike the other exoplanets in Auriga, detected by radial velocity measurements, HAT-P-9 b was detected using the transit method in 2008. It has a mass of 0.67 Jupiter masses and orbits just 0.053 AU from its parent star, with a period of 3.92 days; its radius is 1.4 Jupiter radii, making it a hot Jupiter. Its parent star, HAT-P-9, is an F-type star approximately 480 parsecs from Earth. 
It has a mass of 1.28 solar masses and a radius of 1.32 solar radii. The star KELT-2A (HD 42176A) is the brightest star in Auriga known to host a transiting exoplanet, KELT-2Ab, and is the fifth-brightest transit hosting star overall. The brightness of the star KELT-2A allows the mass and radius of the planet KELT-2Ab to be known quite precisely. KELT-2Ab is 1.524 Jupiter masses and 1.290 Jupiter radii and on a 4.11-day-long orbit, making it another hot Jupiter, similar to HAT-P-9b. The star KELT-2A is a late F-dwarf and is one member of the common-proper-motion binary star system KELT-2. KELT-2B is an early K-dwarf about 295 AU away, and was discovered the same time as the exoplanet. Auriga has the galactic anticenter, about 3.5° to the east of Beta Aurigae. This is the point on the celestial sphere opposite the Galactic Center; it is the edge of the galactic plane roughly nearest to the Solar System. Ignoring nearby bright stars in the foreground this is a smaller and less luminous part of the Milky Way than looking towards the rest of its arms or central bar and has dust bands of the outer spiral arms. Auriga has many open clusters and other objects; rich star-forming arms of the Milky Way - including the Perseus Arm and the Orion–Cygnus Arm - run through it. The three brightest open clusters are M36, M37 and M38, all of which are visible in binoculars or a small telescope in suburban skies. A larger telescope resolves individual stars. Three other open clusters are NGC 2281, lying close to ψ7 Aurigae, NGC 1664, which is close to ε Aurigae, and IC 410 (surrounding NGC 1893), a cluster with nebulosity next to IC 405, the Flaming Star Nebula, found about midway between M38 and ι Aurigae. AE Aurigae, a runaway star, is a bright variable star currently within the Flaming Star Nebula. M36 (NGC 1960) is a young galactic open cluster with approximately 60 stars, most of which are relatively bright; however, only about 40 stars are visible in most amateur instruments. It is at a distance of 3,900 light-years and has an overall magnitude of 6.0; it is 14 light-years wide. Its apparent diameter is 12.0 arcminutes. Of the three open clusters in Auriga, M36 is both the smallest and the most concentrated, though its brightest stars are approximately 9th magnitude. It was discovered in 1749 by Guillaume Le Gentil, the first of Auriga's major open clusters to be discovered. M36 features a 10-arcminute-wide knot of bright stars in its center, anchored by Struve 737, a double star with components separated by 10.7 arcseconds. Most of the stars in M36 are B type stars with rapid rates of rotation. M36's Trumpler class is given as both I 3 r and II 3 m. Besides the central knot, most of the cluster's other stars appear in smaller knots and groups. M37 (NGC 2099) is an open cluster, larger than M36 and at a distance of 4,200 light-years. It has 150 stars, making it the richest cluster in Auriga; the most prominent member is an orange star that appears at the center. M37 is approximately 25 light-years in diameter. It is the brightest open cluster in Auriga with a magnitude of 5.6; it has an apparent diameter of 23.0 arcminutes. M37 was discovered in 1764 by Charles Messier, the first of many astronomers to laud its beauty. It was described as "a virtual cloud of glittering stars" by Robert Burnham Jr. and Charles Piazzi Smyth commented that the star field was "strewed [sic]...with sparkling gold-dust". The stars of M37 are older than those of M36; they are approximately 200 million years old. 
Most of the constituent stars are A type stars, though there are at least 12 red giants in the cluster as well. M37's Trumpler class is given as both I 2 r and II 1 r. The stars visible in a telescope range in magnitude from 9.0 to 13.0; there are two 9th magnitude stars in the center of the cluster and an east to west chain of 10th and 11th magnitude stars. M38 is a diffuse open cluster at a distance of 3,900 light-years, the least concentrated of the three main open clusters in Auriga; it is classified as a Trumpler Class II 2 r or III 2 r cluster because of this. It appears as a cross-shaped or pi-shaped object in a telescope and contains approximately 100 stars; its overall magnitude is 6.4. M38, like M36, was discovered by Guillaume Le Gentil in 1749. It has an apparent diameter of approximately 20 arcseconds and a true diameter of about 25 light-years. Unlike M36 or M37, M38 has a varied stellar population. The majority of the population consists of A and B type main sequence stars, the B type stars being the oldest members, and a number of G type giant stars. One yellow-hued G type star is the brightest star in M38 at a magnitude of 7.9. The brightest stars in M38 are magnitude 9 and 10. M38 is accompanied by NGC 1907, a smaller and dimmer cluster that lies half a degree south-southwest of M38; it is at a distance of 4,200 light-years. The smaller cluster has an overall magnitude of 8.2 and a diameter of 6.0 arcminutes, making it about a third the size of M38. However, NGC 1907 is a rich cluster, classified as a Trumpler Class I 1 m n cluster. It has approximately 12 stars of magnitude 9–10, and at least 25 stars of magnitude 9–12. IC 410, a faint nebula, is accompanied by the bright open cluster NGC 1893. The cluster is thin, with a diameter of 12 arcminutes and a population of approximately 20 stars. Its accompanying nebula has very low surface brightness, partially because of its diameter of 40 arcminutes. It appears in an amateur telescope with brighter areas in the north and south; the brighter southern patch shows a pattern of darker and lighter spots in a large instrument. NGC 1893, of magnitude 7.5, is classified as a Trumpler Class II 3 r n or II 2 m n cluster, meaning that it is not very large and is somewhat bright. The cluster possesses approximately 30 stars of magnitude 9–12. In an amateur instrument, IC 410 is only visible with an Oxygen-III filter. NGC 2281 is a small open cluster at a distance of 1,500 light-years. It contains 30 stars in a crescent shape. It has an overall magnitude of 5.4 and a fairly large diameter of 14.0 arcseconds, classified as a Trumpler Class I 3 m cluster. The brightest star in the cluster is magnitude 8; there are approximately 12 stars of magnitude 9–10 and 20 stars of magnitude 11–13. NGC 1931 is a nebula in Auriga, slightly more than one degree to the west of M36. It is considered to be a difficult target for an amateur telescope. NGC 1931 has an approximate integrated magnitude of 10.1; it is 3 by 3 arcminutes. However, it appears to be elongated in an amateur telescope. Some observers may note a green hue in the nebula; a large telescope will easily show the nebula's "peanut" shape, as well as the quartet of stars that are engulfed by the nebula. The open cluster portion of NGC 1931 is classed as a I 3 p n cluster; the nebula portion is classed as both an emission and reflection nebula. NGC 1931 is approximately 6,000 light-years from Earth and could easily be confused with a comet in the eyepiece of a telescope. 
NGC 1664 is a fairly large open cluster, with a diameter of 18 arcminutes, and moderately bright, with a magnitude of 7.6, comparable to several other open clusters in Auriga. One open cluster with a similar magnitude is NGC 1778, with a magnitude of 7.7. This small cluster has a diameter of 7 arcminutes and contains 25 stars. NGC 1857, a small cluster, is slightly brighter at magnitude 7.0. It has a diameter of 6 arcminutes and contains 40 stars, making it far more concentrated than the similar-sized NGC 1778. Far dimmer than the other open clusters is NGC 2126 at magnitude 10.2. Despite its dimness, NGC 2126 is as concentrated as NGC 1857, having 40 stars in a diameter of 6 arcminutes. Auriga is home to two meteor showers. The Aurigids, named for the entire constellation and formerly called the "Alpha Aurigids", are renowned for their intermittent outbursts, such as those in 1935, 1986, 1994, and 2007. They are associated with the comet Kiess (C/1911 N1), discovered in 1911 by Carl Clarence Kiess. The association was discovered after the outburst in 1935 by Cuno Hoffmeister and Arthur Teichgraeber. The Aurigid outburst on September 1, 1935, prompted the investigation of a connection with Comet Kiess, though the 24-year delay between the comet's return caused doubt in the scientific community. However, the outburst in 1986 erased much of this doubt. Istvan Teplickzky, a Hungarian amateur meteor observer, observed many bright meteors radiating from Auriga in a fashion very similar to the confirmed 1935 outburst. Because the position of Teplickzky's observed radiant and the 1935 radiant were close to the position of Comet Kiess, the comet was confirmed as the source of the Aurigid meteor stream. The Aurigids had a spectacular outburst in 1994, when many grazing meteors—those that have a shallow angle of entry and seem to rise from the horizon—were observed in California. The meteors were tinted blue and green, moved slowly, and left trails at least 45° long. Because they had such a shallow angle of entry, some 1994 Aurigids lasted up to 2 seconds. Though there were only a few visual observers for part of the outburst, the 1994 Aurigids peak, which lasted less than two hours, was later confirmed by Finnish amateur radio astronomer Ilkka Yrjölä. The connection with Comet Kiess was finally confirmed in 1994. The 2007 outburst of the Aurigids was predicted by Peter Jenniskens and was observed by astronomers worldwide. Despite some predictions that there would be no Alpha Aurigid outburst, many bright meteors were observed throughout the shower, which peaked on September 1 as predicted. Much like in the 1994 outburst, the 2007 Aurigids were very bright and often colored blue and green. The maximum zenithal hourly rate was 100 meteors per hour, observed at 4:15 am, California time (12:15 UTC) by a team of astronomers flying on NASA planes. The Aurigids are normally a placid Class II meteor shower that peaks in the early morning hours of September 1, beginning on August 28 every year. Though the maximum zenithal hourly rate is 2–5 meteors per hour, the Aurigids are fast, with an entry velocity of 67 kilometres per second (42 mi/s). The annual Aurigids have a radiant located about two degrees north of Theta Aurigae, a third-magnitude star in the center of the constellation. The Aurigids end on September 4. Some years, the maximum rate has reached 9–30 meteors per hour. The other meteor showers radiating from Auriga are far less prominent and capricious than the Alpha Aurigids. 
The Zeta Aurigids are a weak shower with a northern and southern branch lasting from December 11 to January 21. The shower peaks on January 1 and has very slow meteors, with a maximum rate of 1–5 meteors per hour. It was discovered by William Denning in 1886 and was discovered to be the source of rare fireballs by Alexander Stewart Herschel. There is another faint stream of meteors called the "Aurigids", unrelated to the September shower. This shower lasts from January 31 to February 23, peaking from February 5 through February 10; its slow meteors peak at a rate of approximately 2 per hour. The Delta Aurigids are a faint shower radiating from Auriga. It was discovered by a group of researchers at New Mexico State University and has a very low peak rate. The Delta Aurigids last from September 22 through October 23, peaking between October 6 and October 15. They may be related to the September Epsilon Perseids, though they are more similar to the Coma Berenicids in that the Delta Aurigids last longer and have a dearth of bright meteors. They too have a hypothesized connection to an unknown short period retrograde comet. The Iota Aurigids are a hypothesized shower occurring in mid-November; its parent body may be the asteroid 2000 NL10, but this connection is highly disputed. The hypothesized Iota Aurigids may instead be a faint stream of Taurids. See also References SIMBAD External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-197] | [TOKENS: 17273]
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since its drafting in 1777, the Articles of Confederation was ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. His resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1860 and 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the spread of radio for mass communication and of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the complete U.S. withdrawal in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
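A quick sanity check, not part of the source text, confirms that the square-mile figures in the geography paragraph above line up with their square-kilometer equivalents once the standard conversion factor of roughly 2.59 square kilometers per square mile is applied. A minimal sketch of that arithmetic:

# Consistency check (not from the source) of the square-mile / square-kilometer pairs quoted above.
SQ_MI_TO_SQ_KM = 2.589988                # standard conversion factor

contiguous_area_sq_mi = 3_119_885        # 48 contiguous states plus D.C., from the text
eez_area_sq_mi = 4_500_000               # marine exclusive economic zone, from the text

print(round(contiguous_area_sq_mi * SQ_MI_TO_SQ_KM))          # -> 8080465, close to the quoted 8,080,470 km2
print(round(eez_area_sq_mi * SQ_MI_TO_SQ_KM / 1_000_000, 1))  # -> 11.7, matching the quoted 11.7 million km2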
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times as many reported heat waves as in the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions most attractive to new residents are also among the most vulnerable to extreme weather. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries maintain formal diplomatic missions in the United States, with the exceptions of Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
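As a back-of-envelope check, not taken from the source, the 2024 military-spending figures quoted above are internally consistent: $997 billion at a 37% share implies roughly $2.7 trillion of worldwide military spending, and at 3.4% of GDP it implies a U.S. GDP of roughly $29 trillion, matching the figure given later in the Economy section. A minimal sketch of that arithmetic:

# Back-of-envelope check (my own arithmetic, not from the source) of the 2024 figures quoted above.
us_military_spending_bn = 997     # U.S. military spending, $ billions, from the text
share_of_global_spending = 0.37   # share of global military spending, from the text
share_of_us_gdp = 0.034           # share of U.S. GDP, from the text

implied_global_spending_bn = us_military_spending_bn / share_of_global_spending
implied_us_gdp_bn = us_military_spending_bn / share_of_us_gdp

print(round(implied_global_spending_bn))  # -> 2695, i.e. roughly $2.7 trillion of global military spending
print(round(implied_us_gdp_bn))           # -> 29324, i.e. roughly $29 trillion of GDP, consistent with the Economy section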
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at every level from local to national. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. State police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parities (PPP), and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large market for U.S. Treasury securities, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including Canada and Mexico under the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025 the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuel, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' road network of 4 million miles (6.4 million kilometers), owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, including five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; some airports are privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans aged 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman; in 2019, at 23%, the country had the world's highest rate of children living in single-parent households. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 Native American tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto official language of the United States, and in 2025, Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. aged five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
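The population-clock figure quoted at the start of the paragraph above, a net gain of one person every 16 seconds, works out to about 5,400 people per day simply because a day has 86,400 seconds. A minimal check of that arithmetic (my own, not Census Bureau code):

# Verifies the population-clock arithmetic quoted above: one net new person every 16 seconds.
seconds_per_day = 24 * 60 * 60          # 86,400 seconds in a day
seconds_per_net_new_person = 16         # from the text
print(seconds_per_day / seconds_per_net_new_person)  # -> 5400.0 people per day, matching the quoted ~5,400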
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans aged 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. Literacy in the U.S. is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and in combined public and private spending Americans spend more than any other nation. Colleges and universities directly funded by the federal government, including the U.S. service academies, the Naval Postgraduate School, and military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as a homogenizing melting pot, and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers, like Harriet Beecher Stowe, and authors of slave narratives, such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s, and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful movies selling the most tickets in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Of the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, potatoes, and turkey as the main course are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_ref-36] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not pay stock options which AI researchers typically get. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, partially needed to use Microsoft's cloud-computing service Azure. From September to December, 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, and they added MS-Copilot to many installations of Windows and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding their right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, which was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company’s cash burn remains high because of the intensive computational costs required to train and operate large language models. It projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion. The deal values OpenAI as the most valuable privately owned company in the world—surpassing SpaceX as the world's most valuable private company. On November 17, 2023, Sam Altman was removed as CEO when its board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky’s capabilities into ChatGPT. In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI’s ChatGPT Health product and was intended to strengthen the company’s medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 was also covering other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips. In September 2025, it was revealed that OpenAI signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. It also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google’s position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced that it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed strawberry. Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. OpenAI released its deep research agent nine days later. It scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated GPT-4.5 would be the last model without full chain-of-thought reasoning. 
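For illustration, the sketch below shows how a developer might call OpenAI's hosted models through the commercial API described above, using the official openai Python client library. It is a minimal, hedged example rather than a description of any particular OpenAI product: the model name, prompt, and environment-variable setup are illustrative assumptions, and the interface has been revised several times since the original GPT-3 API.

# Minimal sketch: querying OpenAI's hosted API with the official Python client.
# Assumptions (not from the article): the "openai" package is installed and an
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in one sentence what an API is."},
    ],
)
print(response.choices[0].message.content)  # prints the model's reply text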
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2. This model will be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists to help with research and writing. The platform utilizes GPT-5.2 as a backend to automate the process of drafting for scientific papers, including features for managing citations, complex equation formatting, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this repudiation. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team denied receiving anything close to 20%. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions including personal details such as names, locations, and intimate topics appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. 
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media. Paul Nakasone then joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems on the other side should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. 
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on a more competent federal government. Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or from acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradict this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept as well as Raw Story and Alternate Media Inc. filed a lawsuit against OpenAI on copyright infringement grounds. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. 
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. 
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her death; her son, Stein-Erik Soelberg, then 56 years old, had allegedly murdered her after spending the preceding months discussing his paranoid delusions with ChatGPT. The suit claimed that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who appear disconnected from reality. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Tax_on_carbon_emissions] | [TOKENS: 4948]
Contents Carbon tax A carbon tax is a tax levied on the carbon emissions from producing goods and services. Carbon taxes are intended to make visible the hidden social costs of carbon emissions. They are designed to reduce greenhouse gas emissions by essentially increasing the price of fossil fuels. This both decreases demand for goods and services that produce high emissions and incentivizes making them less carbon-intensive. When a fossil fuel such as coal, petroleum, or natural gas is burned, most or all of its carbon is converted to CO2. Greenhouse gas emissions cause climate change. This negative externality can be reduced by taxing carbon content at any point in the product cycle. Both carbon taxes and carbon emission trading are forms of carbon pricing. Two common economic alternatives to carbon taxes are tradable permits with carbon credits and subsidies. In its simplest form, a carbon tax covers only CO2 emissions. It could also cover other greenhouse gases, such as methane or nitrous oxide, by taxing such emissions based on their CO2-equivalent global warming potential. Research shows that carbon taxes often reduce emissions. Many economists argue that carbon taxes are the most efficient (lowest cost) way to tackle climate change. As of 2019, carbon taxes have either been implemented or are scheduled for implementation in 25 countries. 46 countries have put some form of price on carbon, either through carbon taxes or carbon emission trading schemes. Some experts observe that a carbon tax can negatively affect public welfare, since it tends to hit low- and middle-income households hardest by making necessities such as petrol and electricity more expensive. Alternatively, the tax can be too conservative, making "comparatively small dents in overall emissions". To make carbon taxes fairer, policymakers can try to redistribute the revenue generated from carbon taxes to low-income groups by various fiscal means. Such a policy initiative becomes a carbon fee and dividend, rather than a plain tax. Purpose Carbon dioxide is one of several heat-trapping greenhouse gases (others include methane and water vapor) emitted as a result of human activities. The scientific consensus is that human-induced greenhouse gas emissions are the primary cause of climate change, and that carbon dioxide is the most important of the anthropogenic greenhouse gases. Worldwide, 27 billion tonnes of carbon dioxide are produced by human activity annually. The physical effect of CO2 in the atmosphere can be measured as a change in the Earth-atmosphere system's energy balance – the radiative forcing of CO2. Different greenhouse gases have different physical properties: the global warming potential is an internationally accepted scale of equivalence for other greenhouse gases in units of tonnes of carbon dioxide equivalent. Carbon taxes are designed to reduce greenhouse gas emissions by increasing prices of the fossil fuels that emit them when burned. This both decreases demand for goods and services that produce high emissions and incentivizes making them less carbon-intensive. Economic theory A carbon tax is a form of pollution tax. David Gordon Wilson first proposed this type of tax in 1973. Unlike classic command-and-control regulations, which explicitly limit or prohibit emissions by each individual polluter, a carbon tax aims to allow market forces to determine the most efficient way to reduce pollution. 
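As noted above, a carbon tax can be extended beyond CO2 by weighting each gas by its global warming potential. The following sketch (Python, using approximate 100-year GWP values and an invented tax rate, purely for illustration) shows how a tax levied on CO2-equivalent emissions might be computed:

# Hypothetical illustration of a CO2-equivalent carbon tax.
# GWP values are approximate 100-year figures; the tax rate is invented.
GWP_100 = {"co2": 1.0, "ch4": 28.0, "n2o": 265.0}  # tonnes CO2e per tonne of gas

def carbon_tax(emissions_tonnes: dict, rate_per_tonne_co2e: float) -> float:
    """Return the tax owed for a bundle of greenhouse gas emissions."""
    total_co2e = sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())
    return total_co2e * rate_per_tonne_co2e

# Example: a facility emitting 1,000 t of CO2 and 10 t of methane, taxed at $50/t CO2e:
# 1,000 x 1 + 10 x 28 = 1,280 t CO2e, so the tax is $64,000.
print(carbon_tax({"co2": 1000, "ch4": 10}, rate_per_tonne_co2e=50.0))

In practice the taxable base, the GWP figures used, and the point in the supply chain at which the tax is levied all vary by jurisdiction; the sketch only illustrates the CO2-equivalence arithmetic.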
A carbon tax is an indirect tax—a tax on a transaction—as opposed to a direct tax, which taxes income. Carbon taxes are price instruments since they set a price rather than an emission limit. In addition to creating incentives for energy conservation, a carbon tax puts renewable energy such as wind, solar and geothermal on a more competitive footing. In economic theory, pollution is considered a negative externality, a negative effect on a third party not directly involved in a transaction, and is a type of market failure. To confront the issue, economist Arthur Pigou proposed taxing the goods (in this case hydrocarbon fuels) that were the source of the externality (CO2), so as to accurately reflect the cost of the goods to society, thereby internalizing the production costs. A tax on a negative externality is called a Pigovian tax, and it should equal the cost of the externality. Within Pigou's framework, the changes involved are marginal, and the size of the externality is assumed to be small enough not to distort the economy. Climate change is claimed to result in catastrophic (non-marginal) changes. "Non-marginal" means that the impact could significantly reduce the growth rate in income and welfare. The amount of resources that should be devoted to climate change mitigation is controversial. Policies designed to reduce carbon emissions could have a non-marginal impact, but are asserted not to be catastrophic. The design of a carbon tax involves two primary factors: the level of the tax, and the use of the revenue. The former is based on the social cost of carbon (SCC), which attempts to calculate the numeric cost of the externalities of carbon pollution. The precise number is the subject of debate in environmental and policy circles. A higher SCC corresponds with a higher evaluation of the costs of carbon pollution on society. Stanford University scientists have estimated the social cost of carbon to be upwards of $200 per ton. More conservative estimates pin the cost at around $50. The use of the revenue is another subject of debate in carbon tax proposals. A government may use revenue to increase its discretionary spending, or address deficits. However, such proposals often run the risk of being regressive and of sparking public backlash due to the increased cost of energy associated with such taxes. To avoid this and increase the popularity of a carbon tax, a government may make the carbon tax revenue-neutral. This can be done by reducing income tax proportionate to the level of the carbon tax, or by returning carbon tax revenues to citizens as a dividend. Carbon leakage happens when the regulation of emissions in one country or sector pushes those emissions to other places with less regulation. Leakage effects can be either negative (i.e., increasing the effectiveness of reducing overall emissions) or positive (reducing the effectiveness of reducing overall emissions). Negative leakages, which are desirable, can be referred to as "spill-over". According to one study, short-term leakage effects need to be judged against long-term effects. A policy that, for example, establishes carbon taxes only in developed countries might leak emissions to developing countries. However, a desirable negative leakage could occur due to reduced demand for coal, oil, and gas in developed countries, lowering prices. This could allow developing countries to replace coal with oil or gas, lowering emissions. 
In the long-run, however, if less polluting technologies are delayed, this substitution might have no long-term benefit. Carbon leakage is central to climate policy, given the 2030 Energy and Climate Framework and the review of the European Union's third carbon leakage list. A carbon tariff or border carbon adjustment (BCA) is an eco-tariff on embedded carbon. The aim is generally to prevent carbon leakage from states without a carbon price. Examples of imports which are high-carbon and so may be subject to a carbon tariff are electricity generated by coal-fired power stations, iron and steel from blast furnaces, and fertilizer from the Haber process. Currently, only California applies a BCA for electricity, while the European Union and the United Kingdom will apply BCAs from 2026 and 2027, respectively. Several other countries and territories with emissions pricing are considering them. Impacts Research shows that carbon taxes effectively reduce greenhouse gas emissions. Most economists assert that carbon taxes are the most efficient and effective way to curb climate change, with the least adverse economic effects. One study found that Sweden's carbon tax successfully reduced carbon dioxide emissions from transport by 11%. A 2015 British Columbia study found that the taxes reduced greenhouse gas emissions by 5–15% while having negligible overall economic effects. A 2017 British Columbia study found that industries on the whole benefited from the tax and "small but statistically significant 0.74 percent annual increases in employment" but that carbon-intensive and trade-sensitive industries were adversely affected. A 2020 study of carbon taxes in wealthy democracies showed that carbon taxes had not limited economic growth. Carbon taxes also appear to not adversely affect employment or GDP growth in Europe. Their economic impact ranges from zero to modest positive. A number of studies have found that in the absence of an increase in social benefits and tax credits, a carbon tax would hit poor households harder than rich households. Gilbert E. Metcalf disputed that carbon taxes would be regressive in the US. Carbon taxes can increase electricity prices. There is a debate about the relation between carbon pricing (like carbon emission trading and carbon tax) and climate justice. Carbon pricing can be adjusted to some principles of climate justice like polluters pay. Many proponents of climate justice object to carbon pricing. To close the gap between the two concepts, carbon pricing could put a cap on emissions, remove pollution from underserved communities, and justly divide revenues. Support and opposition Since carbon taxation was first proposed, numerous economists have described its strengths as a means of reducing CO2 pollution. This tax has been praised as "a far better way to control pollution than the present method of specific regulation." It has also been lauded for its market based simplicity. This includes a description as "the most efficient way to guide the decisions of producers and consumers", since "carbon emissions have an 'unpriced' societal cost in terms of their deleterious effects on the earth's climate." Since 2019 over 3,500 U.S. economists have signed The Economists' Statement on Carbon Dividends. This statement describes the benefits of a U.S. carbon tax along with suggestions for how it could be developed. One recommendation is to return revenues generated by a tax to the general public. 
The statement was originally signed by 45 Nobel Prize winning economists, former chairs of the Federal Reserve, former chairs of the Council of Economic Advisers, and former secretaries of the Treasury Department. It has been recognized as a historic example of consensus amongst economists. Ben Ho, professor of economics at Vassar College, has argued that "while carbon taxes are part of the optimal portfolio of policies to fight climate change, they are not the most important part." Carbon taxes have been opposed by a substantial proportion of the public. They have also been rejected in several elections, and in some cases reversed as opposition increased. One response has been to specifically allocate carbon tax revenues back to the public in order to garner support. Citizens' Climate Lobby is an international organization with over 500 chapters. It advocates for carbon tax legislation in the form of a progressive fee and dividend structure. NASA climatologist James E. Hansen has also spoken in favor of a revenue neutral carbon fee. In some instances knowledge about how carbon tax revenues are used can affect public support. Dedicating revenues to climate projects and compensating low income housing have been found to be popular uses of revenue. However, providing information about specific revenue uses in countries that have implemented carbon taxes has been shown to have limited effectiveness in increasing public support. A 2021 poll conducted by GlobeScan on 31 countries and territories found that 62 percent on average are supportive of a carbon tax, while only 33 percent are opposed to a carbon tax. In 28 of the 31 countries and territories listed in the poll, a majority of their populations are supportive of a carbon tax. Alternatives Carbon emission trading (also called cap and trade) is another approach. Emission levels are limited and emission permits traded among emitters. The permits can be issued via government auctions or offered without charge based on existing emissions (grandfathering). Auctions raise revenues that can be used to reduce other taxes or to fund government programs. Variations include setting price-floor and/or price-ceiling for permits. A carbon tax can be combined with trading. A cap with grandfathered permits can have an efficiency advantage since it applies to all industries. Cap and trade provides an equal incentive for all producers at the margin to reduce their emissions. This is an advantage over a tax that exempts or has reduced rates for certain sectors. Both carbon taxes and trading systems aim to reduce emissions by creating a price for emitting CO2. In the absence of uncertainty both systems will result in the efficient market quantity and price of CO2. When the environmental damage and therefore the appropriate tax of each unit of CO2 cannot be accurately calculated, a permit system may be more advantageous. In the case of uncertainty regarding the costs of CO2 abatement for firms, a tax is preferable. Permit systems regulate total emissions. In practice the limit has often been set so high that permit prices are not significant. In the first phase of the European Union Emissions Trading System, firms reduced their emissions to their allotted quantity without the purchase of any additional permits. This drove permit prices to nearly zero two years later, crashing the system and requiring reforms that would eventually appear in EUETS Phase 3. The distinction between carbon taxes and permit systems can get blurred when hybrid systems are allowed. 
A hybrid sets limits on price movements, potentially softening the cap. When the price gets too high, the issuing authority issues additional permits at that price. A price floor may be breached when emissions are so low that no one needs to buy a permit. Economist Gilbert Metcalf has proposed such a system, the Emissions Assurance Mechanism, and the idea, in principle, has been adopted by the Climate Leadership Council. James E. Hansen argued in 2009 that emissions trading would only make money for banks and hedge funds and allow business-as-usual for the chief carbon-emitting industries. A carbon credit is a tradable instrument (typically a virtual certificate) that conveys a claim to avoided GHG emissions or to the enhanced removal of greenhouse gas (GHG) from the atmosphere. One carbon credit represents the avoided or enhanced removal of one metric ton of carbon dioxide or its carbon dioxide-equivalent (CO2e). Carbon offsetting is the practice of using carbon credits to offset or counter an entity's greenhouse gas (GHG) inventory emissions in line with reporting programs or institutional emissions targets/goals. Carbon credit trading mechanisms (i.e., crediting programs), enable project developers to implement projects that mitigate GHGs and receive carbon credits which can be sold to interested buyers who may use the credits to claim they have offset their inventory GHG emissions. Similar to "offsetting", carbon credits that are permitted as compliance instruments within regulatory compliance markets (e.g., The European Union Emission Trading Scheme or the California Cap-n-Trade program) can be used by regulated entities to report lower emissions and achieve compliance status (with limitations around their use that vary by compliance program). Aside from "offsetting", carbon credits can also be used to make contributions toward global net zero GHG-level targets. It is an individual buyer's choice how to use, or "retire", the carbon credit. Two related taxes are emissions taxes and energy taxes. An emissions tax on greenhouse gas emissions requires individual emitters to pay a fee, charge, or tax for every tonne of greenhouse gas, while an energy tax is applied to the fuels themselves. In terms of climate change mitigation, a carbon tax is not a perfect substitute for an emissions tax. For example, a carbon tax encourages reduced fuel use, but it does not encourage emissions reduction such as carbon capture and storage. Energy taxes increase the price of energy regardless of emissions.: 416 An ad valorem energy tax is levied according to the energy content of a fuel or the value of an energy product, which may or may not be consistent with the emitted greenhouse gas amounts and their respective global warming potentials. Studies indicate that to reduce emissions by a certain amount, ad valorem energy taxes would be more costly than carbon taxes. However, although greenhouse gas emissions are an externality, using energy services may result in other negative externalities, e.g., air pollution not covered by the carbon tax (such as ammonia or fine particles). A combined carbon-energy tax may therefore be better at reducing air pollution than a carbon tax alone.[citation needed] Any of these taxes can be combined with a rebate, where the money collected by the tax is returned to qualifying parties, taxing heavy emitters and subsidizing those that emit less carbon. 
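To make the rebate ("fee and dividend") mechanism described above concrete, the sketch below uses entirely hypothetical household emissions and an invented fee rate: each household pays in proportion to its emissions and receives an equal share of the total revenue, so households that emit less than average come out ahead while heavy emitters pay a net amount.

# Hypothetical fee-and-dividend illustration. All figures are invented.
def fee_and_dividend(emissions_by_household: dict, fee_per_tonne: float) -> dict:
    """Return each household's net transfer (positive = gains money)."""
    revenue = sum(tonnes * fee_per_tonne for tonnes in emissions_by_household.values())
    dividend = revenue / len(emissions_by_household)  # equal per-household rebate
    return {household: dividend - tonnes * fee_per_tonne
            for household, tonnes in emissions_by_household.items()}

# Example with made-up annual emissions (tonnes CO2e) and a $50/tonne fee.
# Revenue is $2,250, the per-household dividend is $750, and the net transfers
# are +$500, +$250 and -$750, summing to zero (revenue neutrality).
print(fee_and_dividend({"low": 5, "median": 10, "high": 30}, fee_per_tonne=50.0))

Real proposals differ in how the dividend is allocated (per adult, per capita, income-tested, or via offsetting tax cuts), but the zero-sum structure shown here is what makes such schemes revenue-neutral.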
Because carbon taxes only target carbon dioxide, they do not target other greenhouse gasses, such as methane, which have a greater warming potential. Many countries tax fuel directly; for example, the UK imposes a hydrocarbon oil duty directly on vehicle hydrocarbon oils, including petrol and diesel fuel. While a direct tax sends a clear signal to the consumer, its efficiency at influencing consumers' fuel use has been challenged for reasons including: Vehicle fuel taxes may reduce the "rebound effect" that occurs when vehicle efficiency improves. Consumers may make additional journeys or purchase heavier and more powerful vehicles, offsetting the efficiency gains. A 2018 survey of leading economists found that 58% of the surveyed economists agreed with the assertion, "Carbon taxes are a better way to implement climate policy than cap-and-trade," 31% stated that they had no opinion or that it was uncertain, but none of the respondents disagreed. In a review study in 1996 the authors concluded that the choice between an international quota (cap) system, or an international carbon tax, remained ambiguous.: 430 Another study in 2012 compared a carbon tax, emissions trading, and command-and-control regulation at the industry level, concluding that market-based mechanisms would perform better than emission standards in achieving emission targets without affecting industrial production. Implementation Both energy and carbon taxes have been implemented in response to commitments under the United Nations Framework Convention on Climate Change. In most cases the tax is implemented in combination with exemptions. Indirect carbon prices, such as fuel taxes, are much more common than carbon taxes. In 2021, OECD reported that 67 of the 71 countries it assessed had some form of fuel tax. Only 39 had carbon taxes or ETSs. However, the use of carbon taxes is growing more quickly. In addition, several countries plan to further strengthen existing carbon taxes in the coming years, including Singapore, Canada and South Africa. Current carbon price policies, including carbon taxes, are still considered insufficient to create the kinds of changes in emissions that would be consistent with Paris Agreement goals. The International Monetary Fund, OECD, and others have stated that current fossil fuel prices generally fail to reflect environmental impacts. In Europe, many countries have imposed energy taxes or energy taxes based partly on carbon content. These include Denmark, Finland, Germany, Ireland, Italy, the Netherlands, Norway, Slovenia, Sweden, Switzerland, and the UK. None of these countries have been able to introduce a uniform carbon tax for fuels in all sectors. Denmark is the first country to include livestock emissions in their carbon tax system. During the 1990s, a carbon/energy tax was proposed at the EU level but failed due to industrial lobbying. In 2010, the European Commission considered implementing a pan-European minimum tax on pollution permits purchased under the European Union Greenhouse Gas Emissions Trading Scheme (EU ETS) in which the proposed new tax would be calculated in terms of carbon content. The suggested rate of €4 to €30 per tonne of CO2. In 1997, Costa Rica imposed a 3.5 percent carbon tax on hydrocarbon fuels. A portion of the proceeds go to the "Payment for Environmental Services" (PSA) program which gives incentives to property owners to practice sustainable development and forest conservation. Approximately 11% of Costa Rica's national territory is protected by the plan. 
The program now pays out roughly $15 million a year to around 8,000 property owners. In the 2008 Canadian federal election, a carbon tax proposed by Liberal Party leader Stéphane Dion, known as the Green Shift, became a central issue. It would have been revenue-neutral, balancing increased taxation on carbon with rebates. However, it proved to be unpopular and contributed to the Liberal Party's defeat, with the party earning its lowest vote share since Confederation. The Conservative party won the election by promising to "develop and implement a North American-wide cap-and-trade system for greenhouse gases and air pollution, with implementation to occur between 2012 and 2015". In 2018, Canada enacted a revenue-neutral carbon levy starting in 2019, fulfilling Prime Minister Justin Trudeau's campaign pledge. The Greenhouse Gas Pollution Pricing Act applies only to provinces without adequate provincial carbon pricing. As of September 2020, seven of thirteen Canadian provinces and territories use the federal carbon tax while three have developed their own carbon tax programs. In December 2020, the federal government released an updated plan with a CA$15 per tonne per year increase in the carbon pricing, reaching CA$95 per tonne in 2025 and CA$170 per tonne in 2030. Quebec became the first province to introduce a carbon tax. The tax was to be imposed on energy producers starting 1 October 2007, with revenue collected used for energy-efficiency programs. The tax rate for gasoline was CA$0.008 per litre, or about CA$3.50 per tonne of CO2 equivalent. The Liberal government claimed 80% of Canadians were receiving more money back via a carbon rebate, but the tax was unpopular with many Canadians and became a political issue. In 2023, the Official Opposition refused to support a free trade bill between Canada and Ukraine that added a new environmental chapter to "promote carbon pricing". Liberal Trade Minister Mary Ng stated, "We should applaud the Ukrainians for being able to negotiate an agreement and also fight climate change." Liberal House leader Karina Gould argued the Tories were "abandoning Ukraine and not taking climate change seriously", and accused them of "American-style, right-wing politics". Pierre Poilievre, the leader of the Opposition, called the carbon tax stipulation "cruel" and stated, "It is disgusting, that Trudeau's ideological obsession with taxing working-class people, seniors and suffering families has come ahead of what should have been a free trade agreement." By the end of 2024, opinion polls showed the ruling Trudeau Liberals were 20 points behind the Conservative Party of Canada, which was using the slogan "Axe the Tax" in its platform. Many Liberals, worried about projected losses in the 2025 federal election, pushed for Justin Trudeau to resign, which he eventually announced on January 6, 2025. The party chose former Governor of the Bank of Canada Mark Carney as its new leader, and within a few hours of being sworn in as Canada's 24th prime minister on March 14, 2025, Carney signed a declaration ending the consumer carbon tax and the rebate. Carney stated in his platform that "further measures to make up for the lost impact of the consumer carbon tax" would be implemented. Alberta Premier Danielle Smith warned of forthcoming increased industrial carbon taxes, which would be passed on to consumers without a rebate program in effect. A national carbon tax in the U.S. has been repeatedly proposed, but never enacted. For instance, on 23 July 2018, Representative Carlos Curbelo (R-FL) introduced H.R. 
6463, the "Market Choice Act", a proposal for a carbon tax in which revenue is used to bolster American infrastructure and environmental solutions. The bill was introduced in the House of Representatives, but did not become law. A number of organizations are currently advancing national carbon tax proposals. To address concerns from conservatives that a carbon tax would grow government and increase cost of living, recent proposals have centered around revenue-neutrality. The Citizens' Climate Lobby (CCL), republicEn (formerly E&EI), the Climate Leadership Council (CLC), and Americans for Carbon Dividends (AFCD) support a revenue-neutral carbon tax with a border adjustment. The latter two organizations advocate for a specific framework called the Baker-Shultz Carbon Dividends Plan, which has gained national bipartisan traction since its announcement in 2017. The central principle is a gradually rising carbon tax in which all revenues are rebated as equal dividends to the American people. This plan is co-authored by and named after Republican elder-statesmen James Baker and George Shultz. It is also supported by companies including Microsoft, Pepsico, First Solar, American Wind Energy Association, Exxon Mobil, BP, and General Motors. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Or_Akiva] | [TOKENS: 985]
Contents Or Akiva Or Akiva (Hebrew: אור עקיבא, lit. 'Light of Akiva') is a city in the Haifa District of Israel, on the country's coastal plain. It is located inland from the ancient port city of Caesarea and the Mediterranean Sea, and to the north of the city of Hadera. It is 39 kilometres (24 miles) south of Haifa and 48 km (30 mi) north of Tel Aviv. In 2023 it had a population of 24,203. History Or Akiva was founded in the early 1950s as a ma'abara (transit camp) for new Jewish immigrants, the majority hailing from Morocco. It was built on the land of the depopulated Palestinian village Barrat Qisarya. In the 1990s, new immigrants from the former Soviet Union began to settle there, which led to an upswing in building and development. Demographics According to the Israel Central Bureau of Statistics (CBS), at the end of 2005 the city had a total population of 15,800, making it the least-populous city in Israel. According to CBS, in 2001 the ethnic makeup of the city was 99.3% Jewish and other non-Arab, with no significant Arab population. There were 7,400 males and 7,900 females. The population of the city was spread out, with 33.7% 19 years of age or younger, 15.4% between 20 and 29, 20.8% between 30 and 44, 16.3% from 45 to 59, 4.1% from 60 to 64, and 9.7% 65 years of age or older. The population growth rate in 2001 was -0.1%. Economy Or Akiva is home to a number of large industrial plants, among them Dexxon (pharmaceuticals), Anna Lotan Ltd. (professional skin care), Darbox Ltd. (plastic packaging), Meprolight (gunsights and nightvision), Plasson (livestock feeders), Resonetics (medical devices) and Tyco International (electronics). Education According to CBS, there are 10 schools and 2,409 students in the city, comprising 6 elementary schools with 1,575 elementary school students and 5 high schools with 834 high school students. 51.9% of 12th grade students were entitled to a matriculation certificate in 2001. Since 2005, gap year volunteers from Habonim Dror have worked in the town and surrounding areas, in schools and in extracurricular frameworks with Arab and Jewish youth. Neighborhoods Located in the north of the city, bounded by the northern industrial area to the east and road number 2 (coastal road) to the west. Today, about 4,500 residents live in the neighborhood, which is about 1,700 households. Located between the center and the south of the city, bounded between Shidlovsky Boulevard to the north, Koplovich to the east and Hanasi Weizman Boulevard to the south and west. Located in the center of the city and bounded between Shidlovsky streets in the south, Herzl-Stanley in the north, Ha'atsmaat in the east and the beach road in the west. A new neighborhood on the eastern side of Or Akiva, bounded by Route 4 in the east, Shidlovsky Street in the south, and the Ben Gurion neighborhood in the west. A new neighborhood on the west side of Or Akiva, bounded by Route 2 in the west, Shidlovsky Boulevard in the north and Shikimim St. in the south. The neighborhood is characterized by residential towers. An old neighborhood located north of the Ben Gurion neighborhood and bounded by Route 2 to the west and David Elazar St. to the north. Located in the south of Or Akiva, at the western end. The neighborhood is bounded by Hanasi Weizman Boulevard and Shidlovsky Boulevard in the north, Highway 2 in the west, King David Boulevard (the Orot neighborhood) in the east, and the Or Yam neighborhood to the south. 
Located on the eastern side of Or Akiva and bordered to the west by the Gani Rabin neighborhood, to the south by the Or Yam neighborhood and to the east by Highway 4. A new neighborhood being built on the historic lands of Baron Rothschild by the Or Akiva Municipality and the Caesarea Development Corporation. Notable people International relations Or Akiva is twinned with: References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Merav_Michaeli] | [TOKENS: 1083]
Contents Merav Michaeli Merav Michaeli (Hebrew: מרב מיכאלי; born 24 November 1966) is an Israeli politician, journalist, TV anchor, radio broadcaster, feminist and activist currently serving as a member of the Knesset for the Democrats, which was formed in 2024 by a merger of Labor and Meretz. She served as leader of the Labor Party from 2021 until 2024, and as Minister of Transport in the thirty-sixth government of Israel. Biography Michaeli was born in Petah Tikva to Ami Michaeli and Suzan Kastner, of Hungarian Jewish background. She is the granddaughter of Rudolf Kastner and of Nehemia Michaeli, who was the last secretary of the Mapam party. During her youth, Michaeli served as a leader in the Israeli Scouts. In the IDF, Michaeli was a newscaster on Army Radio. She helped establish the Galgalatz and Radio Tel Aviv radio stations and went on to host Hebrew television programs focused on politics. She was a journalist and opinion columnist for the Haaretz newspaper. She also taught university classes and lectured extensively on the topics of feminism, media, and communications. In September 2012, she spoke at TED Jaffa on the theme of "paradigm shift", arguing that society should "cancel marriage". Political career In October 2012 Michaeli announced that she was joining the Labor Party and intended to run for inclusion on Labor's list for the 2013 Knesset elections. On 29 November 2012, she won fifth place on the Labor Party's list, and was elected to the Knesset when Labor won 15 seats. In preparation for the 2015 general election, the Labor and Hatnuah parties formed the Zionist Union alliance. Michaeli won the ninth slot on the Zionist Union list, and was elected to the Knesset as it won 24 seats. Shortly before the end of the Knesset term, the Zionist Union was dissolved, with Labor and Hatnuah sitting in the Knesset as separate parties. Michaeli was placed seventh on the Labor list for the April 2019 elections, but lost her seat as Labor was reduced to six seats. However, she returned to the Knesset in August 2019 after Stav Shaffir resigned from the legislature. On 22 April 2020, after the 2020 Israeli legislative election, the then Labor party leader Amir Peretz announced that the Labor Party would join the unity government in the Netanyahu-Gantz coalition, but Michaeli refused to sit in the coalition under Netanyahu. She was elected to lead the Israeli Labor Party on 24 January 2021, after her predecessor, Amir Peretz, announced he would not stand for re-election. She announced at the time that her party would have gender equality on the party list, with a female-male rotation. In the 2021 election, the party won seven seats, becoming part of the thirty-sixth government, with Michaeli as Minister of Transport and Road Safety. On 31 December 2021, she announced that the Tel Aviv central bus station would be closed within four years, reneging on her promise to close it immediately. Michaeli was re-elected to lead the Israeli Labor Party in July 2022. In the legislative election held later that year, Labor narrowly crossed the electoral threshold, receiving the bare minimum of four seats. Some blamed Michaeli's refusal to run jointly with the left-wing Meretz for the latter party falling beneath the electoral threshold, enabling the formation of a new government led by Benjamin Netanyahu. Michaeli was accused by prominent Meretz lawmaker Issawi Frej of 'delusions of grandeur'. In 2023 she was one of the active participants in the anti-judicial reform protests. 
She rejected an invitation from Prime Minister Netanyahu to join the compromise talks at the president's residence. On 7 December 2023, Michaeli called a press conference in which she stated her intention to hold a leadership election in April 2024 and announced that she would not run for another term. In February 2024, the party announced that the election would take place on 28 May. She was succeeded as party leader by Yair Golan, who won that election. In April 2024, Michaeli called for dismantling the Netzah Yehuda Battalion, an army unit with a history of abuses, saying it was killing Palestinians "for no real reason". Personal life During the 1990s, Michaeli was in a relationship with Israeli TV and radio producer and host Erez Tal. Since 2007, Michaeli's partner has been television producer, host, and comedian Lior Schleien. She lives in Tel Aviv, near Schleien. In a 2018 interview, Michaeli stated that she did not feel sorry for not having children and that "she never wanted to become a mother". Nevertheless, in August 2021, Michaeli and Schleien's son was born in the United States via surrogacy. In April 2023, Michaeli announced that their second son had been born via surrogacy. In April 2025, the couple announced the birth of their third child, Noa. References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Political_polarization_in_the_United_States] | [TOKENS: 12219]
Contents Political polarization in the United States Political polarization is a prominent component of politics in the United States. Scholars distinguish between ideological polarization (differences between the policy positions) and affective polarization (a dislike and distrust of political out-groups), both of which are apparent in the United States. In the late 20th and early 21st century, the U.S. has experienced a greater surge in ideological polarization and affective polarization than comparable democracies. Differences in political ideals and policy goals are indicative of a healthy democracy. Scholarly questions consider changes in the magnitude of political polarization over time, the extent to which polarization is a feature of American politics and society, and whether there has been a shift away from focusing on triumphs to dominating the perceived abhorrent supporters of the opposing party. Polarization among U.S. legislators is asymmetric, as it has primarily been driven by a rightward shift among Republicans in Congress. Polarization has increased since the 1970s, with rapid increases in polarization during the 2000s onwards. According to the Pew Research Center, members of both parties who have unfavorable opinions of the opposing party have doubled since 1994, while those who have very unfavorable opinions of the opposing party are at record highs as of 2022. Definition and conceptualization The Pew Research Center defines political polarization in the United States as "the vast and growing gap between liberals and conservatives, Republicans and Democrats". According to psychology professors Gordon Heltzel and Kristin Laurin, political polarization occurs when "subsets of a population adopt increasingly dissimilar attitudes toward parties and party members (i.e., affective polarization), as well as ideologies and policies (ideological polarization.)" Polarization has been defined as both a process and a state of being. A defining aspect of polarization, though not its only facet, is a bimodal distribution around conflicting points of view or philosophies. In general, defining a threshold at which an issue is "polarized" is imprecise; detecting the trend of polarization, however, (increasing, decreasing, or stable) is more straightforward. The relationship between ideological and affective polarization is complicated and contested but, generally speaking, scholars recognize an increase in both ideological and affective polarization in the United States over time. Some research suggests that affective polarization is growing more rapidly than ideological polarization and perhaps even driving it. Political scientists debate the relationship between elite-driven polarization and mass polarization (among the general public). Morris Fiorina argues that the American public is not as polarized as is often assumed, suggesting that elite polarization is wrongly imputed to the general public. Conversely, Alan Abramowitz argues that mass polarization is actually imposing polarization upon elites through the electoral process. Affective polarization is closely related to political tribalism and "us-them" thinking. There is mounting psychological evidence that humans are hardwired to display loyalty towards in-groups and hostility and distrust towards out-groups, however they are defined. 
One way to describe this is to say that humans evolved to be partial empathizers, ready to empathize with those from whom they can expect reciprocity, while being incredibly skeptical towards outsiders. Recent research has shown that the interplay between out-group hostility and in-group empathy can be the driver of ideological polarization. Many of our cognitive biases and failures of reason can be traced directly back to our apparent need to defend our group against threats, even when those threats consist mainly of ideas and words. Motivated reasoning and confirmation bias (or myside bias) help to explain the cognitive blindspots that lead us to dismiss or discredit challenging information while granting unwarranted credence to information that supports our pre-existing views. For example, in one study, subjects were asked to evaluate "neutral" quantitative data regarding the efficacy of skin cream. In this treatment, subjects' performance was determined simply by their numeracy, that is, their mathematical skill level. In a second treatment, subjects were presented with data concerning the efficacy of gun control laws. Partisans performed much more poorly when asked to evaluate data that challenged their pre-existing views about gun control laws. High mathematical skill levels did not prevent this. Those who had the strongest mathematical skills were best able to rationalize a false interpretation of the data that conformed with their pre-existing views. Similarly, Americans' views on gun control seem to stem almost entirely from their cultural worldview and how they position themselves in that cultural schema. Statistics do not persuade them to change their minds. Cognitive scientists Mercier and Sperber argue that human reason did not evolve in order to produce logical arguments, but rather to finesse social relationships. Within this view, the evolutionary purpose of reason is not truth, but persuasion and collaboration. These studies suggest that our social and partisan identities, often discussed in the context of identity politics, affect the ways in which we engage information, and may sometimes drive political polarization. Sociologist Daniel DellaPosta introduced the concept of "pluralistic collapse" in a 2020 paper in American Sociological Review. DellaPosta proposed that polarization was not merely a matter of people moving further apart on issues they already disagreed about. Instead, social, cultural, and political alignments have come to encompass an increasingly diverse array of opinions and attitudes. Analyzing 44 years of data from the General Social Survey, DellaPosta concluded that mass polarization had increased through a process of "belief consolidation," the collapse of previously cross-partisan alignments. Consequently, where citizens once held idiosyncratic combinations of views that cut across partisan lines, they now increasingly sort into two comprehensive worldview clusters. He likened the division to an oil spill: "...it's not just that the previously existing division is getting stronger, it's that other opinions that weren't even part of those division[s] to begin with are getting drawn in." Although most studies have focused on survey data to quantify affective polarization, social media and social network based approaches have recently been proposed to estimate the affective polarization. 
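Because affective polarization is usually quantified rather than merely described, a brief illustration of the standard survey-based measure may help: respondents rate their own party and the opposing party on a 0-100 "feeling thermometer", and affective polarization is summarized as the average gap between the two ratings. The sketch below uses invented ratings purely for illustration; real studies draw on surveys such as the American National Election Studies.

# Hypothetical affective-polarization calculation from feeling-thermometer data
# (0 = very cold/negative, 100 = very warm/positive toward a party).
def affective_polarization(respondents: list) -> float:
    """Mean gap between in-party and out-party thermometer ratings."""
    gaps = [r["in_party"] - r["out_party"] for r in respondents]
    return sum(gaps) / len(gaps)

sample = [
    {"in_party": 85, "out_party": 20},
    {"in_party": 70, "out_party": 45},
    {"in_party": 90, "out_party": 10},
]
print(affective_polarization(sample))  # about 56.7 points on the 0-100 scale

A widening average gap across successive survey waves is what studies describe as rising affective polarization; the social-media approaches mentioned above estimate a comparable quantity from interaction networks rather than survey answers.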
History Starting in the early 1830s, the country became progressively more polarized over the issue of slavery: whether it was a great wrong, a sin, a necessary evil, or even a positive good. Neither the Missouri Compromise nor the Compromise of 1850 succeeded in dealing adequately with the problem. The pro-slavery Southern states, thanks to the Three-fifths Compromise, dominated the Federal government. Congress passed repeated gag rules preventing the issue from even being discussed. The Gilded Age of the late 19th century (c. 1870 – 1900) is considered to be one of the most politically polarized periods in American history, with open political violence and highly polarized political discourse. A key event during this era was the election of 1896, which some scholars say led to an era of one-party rule, created "safe seats" for elected officials to build careers as politicians, increased party homogeneity, and increased party polarization. Political polarization was heightened overall, with Republicans strengthening their hold on industrial areas, and Democrats losing ground in the North and upper Midwest. The 1950s and 1960s were marked by high levels of political bipartisanship, the result of a post-World War II "consensus" in American politics, as well as ideological diversity within each of the two major parties. In the 1990s, House Speaker Newt Gingrich's use of "asymmetric constitutional hardball" led to increasing polarization in American politics driven primarily by the Republican Party. Media and political figures began espousing the narrative of polarization in the early 1990s, with a notable example being Pat Buchanan's speech at the 1992 Republican National Convention. In the speech, he declared a culture war for the future of the country. In 1994, the Democratic Party lost control of the House of Representatives for the first time in forty years. Congress went Republican for the first time since 1952. The narrative of political polarization became a recurring theme in the elections of 2000 and 2004. After President George W. Bush barely won reelection in 2004, English historian Simon Schama noted that the US had not been so polarized since the American Civil War, and that a more apt name might be the Divided States of America. From 1994 to 2014, the share of Americans who expressed either "consistently liberal" or "consistently conservative" opinions doubled from 10% to 21%. In 1994, the average Republican was more conservative than 70% of Democrats, compared to more conservative than 94% of Democrats in 2014. The average Democrat went from more liberal than 64% of Republicans to more liberal than 92% of Republicans during the same era. In contrast, families are becoming more politically homogeneous. As of 2018, 80% of marriages had spousal alignment on party affiliation. Parent-child agreement was 75%. Both of these represent significant increases from family homogeneity in the 1960s. A 2022 study found that there had been a substantial increase since 1980 in political polarization among adolescents, driven by parental influence. A Brown University study released in 2020 found that the U.S. was polarizing faster than other democratic countries such as Canada, the United Kingdom, Germany, and Australia. According to Stony Brook University political scientists Yanna Krupnikov and John Barry Ryan, polarization in American politics is primarily a phenomenon among Americans who are deeply involved in politics and very expressive about their political views. 
Americans who are not as involved in politics are not as polarized. Other research suggests that Americans are less polarized than they think, and that there is "significant ideological overlap and agreement on policies." Ideological polarization is felt most strongly among those who are the most politically engaged, such as progressive activists and extreme conservatives. Although Americans are less ideologically polarized than they believe, they are very emotionally polarized (also known as affective polarization). In other words, "they do not like members of the other party." According to Gallup, in 2025 the percentage of Americans self-identifying as politically moderate reached a record low of 34%. Among Republicans, 77% self-identified as conservative, 18% as moderate, and 4% as liberal. Among Democrats, 55% self-identified as liberal, 34% as moderate, and 9% as conservative. In 2025, The Wall Street Journal described the American political system as having "a total breakdown in trust" between the two parties and among the general public. Politically polarizing issues In February 2020, a Pew Research Center study highlighted the current political issues that have the most partisanship. By far, addressing climate change was the most partisan issue with only 21% of Republicans considering it a top policy priority as opposed to 78% of Democrats. Issues that are also extremely partisan include protecting the environment, reforming gun policy, and bolstering the country's military strength. These differences in policy priorities emerge as both Democrats and Republicans shift their focus away from improving the economy. Since 2011, both parties have gradually placed economic stimulation and job growth lower on their priority list, with Democrats experiencing a sharper decline of importance when compared to Republicans. This is in stark contrast to the 1990s, when both Democrats and Republicans shared similar views on climate change and showed significantly more agreement. A 2017 Gallup poll identified issues where the partisan gap has significantly increased over a period of about fifteen years. For Republicans, the most significant shift was the idea that the "federal government has too much power", with 39% of Republicans agreeing with that notion in 2002 as opposed to 82% agreeing in 2016. On the Democratic side, the largest shift was increasing favorability towards Cuba, changing from 32% in 2002 to 66% in 2017. Ultimately, as partisanship continues to permeate and dominate policy, citizens who adhere and align themselves with political parties become increasingly polarized. On some issues with a wide public consensus, partisan politics still divides citizens. For instance, in 2018, even though 60% of Americans believed that the government should provide healthcare for its citizens, opinions were split among party lines with 85% of Democrats, including left-leaning independents, believing that healthcare is the government's responsibility and 68% of Republicans believing that it is not the government's responsibility. Likewise, on some prominent issues where the parties are broadly split, there is bipartisan support for specific policies. In 2020, for example, in health care, 79% of Americans think pre-existing conditions should be covered by health insurance; 60% think abortion should be broadly legal in the first trimester but only 28% in the second trimester and 13% in the third trimester. In 2020, 77% of Americans thought legal immigration is good for the country. 
In 2019, on gun rights, 89% support more mental health funding, 83% support closing the gun show loophole, 72% support red flag laws, and 72% support requiring gun permits when purchasing. In 2017, in the federal budget, there is 80% or more support to retain funding for veterans, infrastructure, Social Security, Medicare, and education. Political polarization shaped the public's reaction to COVID-19. A study that observed the online conversations surrounding the COVID-19 pandemic found that left-leaning individuals were more likely to criticize politicians compared to right-leaning users. Left-leaning social media accounts often shared disease prevention measures through hashtags. Right-leaning posts were more likely to spread conspiracies and retweet posts from the White House's Twitter account. The study continues to explain that, when considering geographic location, because individuals in conservative and right leaning areas are more likely to see COVID-19 as a non-threat, they are less likely to stay home and follow health guidelines. Potential causes Many factors contribute to polarization in American society. While scholars disagree about which factors carry the most explanatory weight, each of the factors below provides a partial explanation of polarization. Elites are more polarized than the general public. High-information citizens tend to hold strong opinions, whereas low-information citizens have "fewer and weaker" opinions. When it comes to politics, many citizens are low-information. In one study, 35% of American voters could be classified as low-information "know-nothings." Research on the relationship between elite and mass polarization has not settled the question of whether elite polarization drives mass polarization, or vice versa. Some studies suggest that ideological polarization among elites tends to increase affective polarization among the public. Other research suggests that elites are actually more ideologically and affectively polarized, and that it is their affective polarization that drives mass affective polarization. This implies that even though elites are better informed about politics and more ideologically consistent, they are also more emotional about politics. Several institutional and structural features of the American electoral system contribute to elite-driven polarization, by selecting more ideologically extreme candidates and rewarding antagonistic legislative behavior over compromise. These are described below. Primary elections select candidates for upcoming general elections. In many states, primaries are closed, which means that only registered party members may vote in that party's primary. Closed primaries exclude millions of independent voters. Because primary election turnout is quite low, their outcomes are generally determined by a committed core of partisans. Because closed primaries do not have to appeal to independent voters, they often produce more ideologically extreme (and hence polarized) candidates. For this reason, some good governance groups advocate for open, non-partisan primaries. Political scientist Robert Boatright has shown how ideologically extreme groups have taken on a larger role in identifying and bankrolling more extreme candidates in primary elections in recent decades, leading to more moderate incumbents "getting primaried." In 2009 and 2010, for instance, many Republican incumbents lost to more extreme Tea Party candidates in their primaries, leading to a more conservative GOP regaining the House in 2010. 
At the same time that primaries have drawn more media attention, elections have in general become more nationalized; that is, issues and candidates are often framed in national rather than local or regional terms. The nationalization of politics contributes to polarization by boiling local politics down to the same national, partisan issues everywhere. In 2012, some scholars argued that the divergence of the parties has been one of the major driving forces of polarization, as policy platforms have become more distant. This theory is based on recent trends in the United States Congress, where the majority party prioritizes the positions that are most aligned with its party platform and political ideology. The adoption of more ideologically distinct positions by political parties can cause polarization among both elites and the electorate. For example, after the passage of the Voting Rights Act, the number of conservative Democrats in Congress decreased, while the number of conservative Republicans increased. Within the electorate during the 1970s, Southern Democrats shifted toward the Republican Party, showing polarization among both the elites and the electorate of both main parties. In 2007, political scientists showed that politicians have an incentive to advance and support polarized positions. These scholars argue that during the early 1990s, the Republican Party used polarizing tactics to become the majority party in the United States House of Representatives—which political scientists Thomas E. Mann and Norman Ornstein refer to as Newt Gingrich's "guerrilla war". Political scientists have also found that moderates are less likely to run than are candidates who are in line with party doctrine, otherwise known as "party fit". Other theories state that politicians who cater to more extreme groups within their party tend to be more successful, helping them stay in office while simultaneously pulling their constituency toward a polar extreme. A 2012 study by Nicholson found that voters are more polarized by contentious statements from leaders of the opposing party than from the leaders of their own party. As a result, political leaders may be more likely to take polarized stances. Political fund-raisers and donors can also exert significant influence and control over legislators. Party leaders are expected to be productive fund-raisers, in order to support the party's campaigns. After Citizens United v. Federal Election Commission, special interests in the U.S. were able to greatly impact elections through increased undisclosed spending, notably through Super political action committees. Some, such as Washington Post opinion writer Robert Kaiser, argued this allowed wealthy people, corporations, unions, and other groups to push the parties' policy platforms toward ideological extremes, resulting in a state of greater polarization. Other scholars, such as Raymond J. La Raja and David L. Wiltse, note that this does not necessarily hold true for mass donors to political campaigns. These scholars argue that a single donor who is polarized and contributes large sums to a campaign does not usually seem to drive a politician toward political extremes. Polarization among U.S. legislators is asymmetric, as it has primarily been driven by a substantial rightward shift among congressional Republicans since the 1970s, alongside a much smaller leftward shift among congressional Democrats, which mainly occurred in the early 2010s and mostly on social, cultural, and religious issues. 
Racially polarized voting is extremely high in the Southern United States. In 2014, in some Deep South states, more than 80% of White Americans voted for Republicans, nearly identical to the share of African Americans who voted for Democrats. Thomas Piketty highlighted in his book Capital and Ideology the gradual shift since World War II in which those with lower educational attainment have increasingly voted for the Republican Party, while those with higher educational attainment have increasingly voted for the Democratic Party. In the 1948 United States presidential election, Harry S. Truman received 50% of the vote from those with a high school diploma, and 30% of the vote from those with college degrees, with the latter then just 6% of the electorate. In the 1976 United States presidential election, Jimmy Carter received 54% of the votes from those with a high school diploma, and 43% from those with college degrees. In the 2020 United States presidential election, Donald Trump received 54% of the vote from those with a high school diploma, 47% of the vote from those with a Bachelor's degree, and 37% of the vote from those with a graduate degree. In 2024, according to political scientists Matt Grossmann and David A. Hopkins, the Republican Party's gains among white voters without college degrees contributed to the rise of right-wing populism. In democracies and other representative governments, citizens vote for the political actors who will represent them. Some scholars argue that political polarization reflects the public's ideology and voting preferences. Dixit and Weibull (2007) claim that political polarization is a natural and regular phenomenon. They argue that there is a link between public differences in ideology and the polarization of representatives, but that an increase in preference differences is usually temporary and ultimately results in compromise. Fernbach, Rogers, Fox and Sloman (2013) argue that it is a result of people having an exaggerated faith in their understanding of complex issues. Asking people to explain their policy preferences in detail typically resulted in more moderate views. Simply asking them to list the reasons for their preferences did not result in any such moderation. Morris P. Fiorina (2006, 2008) posits that polarization is a phenomenon that does not hold for the public, and is instead formulated by commentators to draw further division in government. Others, such as social psychologist Jonathan Haidt and journalists Bill Bishop and Harry Enten, instead note the growing percentage of the U.S. electorate living in "landslide counties", counties where the popular vote margin between the Democratic and Republican candidate is 20 percentage points or greater. In 1976, only 27 percent of U.S. voters lived in landslide counties, which increased to 39 percent by 1992. Nearly half of U.S. voters resided in counties that voted for George W. Bush or John Kerry by 20 percentage points or more in 2004. In 2008, 48 percent of U.S. voters lived in such counties, which increased to 50 percent in 2012 and increased further to 61 percent in 2016. In 2020, 58 percent of U.S. voters lived in landslide counties. At the same time, the 2020 U.S. presidential election marked the ninth consecutive presidential election where the victorious major party nominee did not win a popular vote majority by a double-digit margin over the losing major party nominee(s), continuing the longest sequence of such presidential elections in U.S. 
history that began in 1988 and in 2016 eclipsed the previous longest sequences from 1836 through 1860 and from 1876 through 1900.[note 1] Other studies indicate that cultural differences, centered on ideological movements and geographical polarization within the United States electorate, are correlated with rises in overall political polarization between 1972 and 2004. Religious, ethnic, and other cultural divides within the public have often influenced the emergence of polarization. According to Layman et al. (2005), the ideological split between U.S. Republicans and Democrats also crosses into the religious cultural divide. They claim that Democrats have generally become more moderate in religious views whereas Republicans have become more traditionalist. For example, political scientists have shown that in the United States, voters who identify as Republican are more likely to vote for a strongly evangelical candidate than Democratic voters. This correlates with the rise in polarization in the United States. Another theory contends that religion does not contribute to full-group polarization, but rather, coalition and party activist polarization causes party shifts toward a political extreme. A 2020 paper studying polarization across countries found a correlation between increasing polarization and increasing ethnic diversity, both of which are happening in the United States. The impact of redistricting—potentially through gerrymandering or the manipulation of electoral borders to favor a political party—on political polarization in the United States has been found to be minimal in research by leading political scientists. The logic for this minimal effect is twofold: first, gerrymandering is typically accomplished by packing opposition voters into a minority of congressional districts in a region, while distributing the preferred party's voters over a majority of districts by a slimmer majority than otherwise would have existed. The result of this is that the number of competitive congressional districts would be expected to increase, and in competitive districts representatives have to compete with the other party for the median voter, who tends to be more ideologically moderate. Second, political polarization has also occurred in the Senate, which does not experience redistricting because Senators represent fixed geographical units, i.e., states. The argument that redistricting, through gerrymandering, would contribute to political polarization is based on the idea that new non-competitive districts created would lead to the election of extremist candidates representing the supermajority party, with no accountability to the voice of the minority. One difficulty in testing this hypothesis is to disentangle gerrymandering effects from natural geographical sorting through individuals moving to congressional districts with a similar ideological makeup to their own. Carson et al. (2007) found that redistricting has contributed to a greater level of polarization in the House of Representatives than in the Senate, though this effect has been "relatively modest". Politically motivated redistricting has been associated with the rise in partisanship in the U.S. House of Representatives between 1992 and 1994. Majoritarian electoral institutions have been linked to polarization. However, ending gerrymandering practices in redistricting cannot correct for increased polarization due to the growing percentage of the U.S. 
electorate living in "landslide counties", counties where the popular vote margin between the Democratic and Republican candidate is 20 percentage points or greater. Of the 92 U.S. House seats ranked by The Cook Political Report as swing seats in 1996 that transitioned to being non-competitive by 2016, only 17 percent came as a result of changes to district boundaries while 83 percent came from natural geographic sorting of the electorate election to election. A 2013 review concluded that there is no firm evidence that media institutions contributed to the polarization of average Americans in the last three decades of the 20th century. No evidence supports the idea that longstanding news outlets became increasingly partisan. Analyses confirm that the tone of evening news broadcasts remained unchanged from 1968 to 1996: largely centrist, with a small but constant bias towards Democratic Party positions. More partisan media pockets have emerged in blogs, podcasts, talk radio, websites, and cable news channels, which are much more likely to use insulting language, mockery, and extremely dramatic reactions, collectively referred to as "outrage". People who have strongly partisan viewpoints are more likely to watch partisan news. A 2017 study found no correlation between increased media and Internet consumption and increased political polarization, although the data did confirm a larger increase in polarization among individuals over 65 compared to those aged 18–39. A 2020 paper comparing polarization across several wealthy countries found no consistent trend, prompting Ezra Klein to reject the theory that the Internet and social media were the underlying cause of the increase in the United States. Along with political scientist Sam Abrams, social psychologist Jonathan Haidt argues that political elites in the United States became more polarized beginning in the 1990s as the Greatest Generation and the Silent Generation (fundamentally shaped by their living memories of World War I, World War II, and the Korean War) were gradually replaced with Baby boomers and Generation Jones (fundamentally shaped by their living memories of the U.S. culture war of the 1960s). Haidt argues that because of the difference in their life experience relevant to moral foundations, Baby boomers and Generation Jones may be more prone to what he calls "Manichean thinking," and along with Abrams and FIRE President Greg Lukianoff, Haidt argues that changes made by Newt Gingrich to the parliamentary procedure of the U.S. House of Representatives beginning in 1995 made the chamber more partisan. Unlike in the first half of the 20th century, protests of the 1960s civil rights movement (such as the Selma to Montgomery marches in 1965) were televised, along with police brutality and urban race rioting during the latter half of the decade. In 1992, 60 percent of U.S. households held cable television subscriptions, and Haidt, Abrams, and Lukianoff argue that the expansion of cable television, and in particular Fox News's coverage since 2015 of student activism over political correctness at American colleges and universities, is one of the principal factors amplifying political polarization since the 1990s. Haidt and Lukianoff argue that the filter bubbles created by the News Feed algorithm of Facebook and other social media platforms are also one of the principal factors amplifying political polarization since 2000, when a majority of U.S. 
households first had at least one personal computer and then internet access in 2001. In 2002, a majority of U.S. survey respondents reported having a mobile phone. Big data algorithms are used in personalized content creation and automation; however, this method can be used to manipulate users in various ways. The problem of misinformation is exacerbated by the educational bubble, users' critical thinking ability, and news culture. In a 2015 study, 62.5% of Facebook users were unaware of any curation of their News Feed. Scientists have started to investigate algorithms whose unexpected outcomes may lead to antisocial political, economic, geographic, racial, or other discrimination. Facebook has offered little transparency into the inner workings of the algorithms used for News Feed curation. The algorithms use past activity as a reference point for predicting users' tastes in order to keep them engaged. This leads to the formation of a filter bubble that increasingly shields users from diverse information. Users are left with a skewed worldview derived from their own preferences and biases. In 2015, researchers from Facebook published a study indicating that the Facebook algorithm perpetuates an echo chamber amongst users by occasionally hiding content from individual feeds that users potentially would disagree with: for example, the algorithm removed one in every 13 pieces of diverse content from news sources for self-identified liberals. In general, the results from the study indicated that the Facebook algorithm ranking system caused approximately 15% less diverse material in users' content feeds, and a 70% reduction in the click-through rate of the diverse material. In the political field at least, Facebook appears to have a counter-effect on being informed: in two studies from the US with a total of more than 2,000 participants, the influence of social media on general knowledge of political issues was examined in the context of two US presidential elections. The results showed that the frequency of Facebook use was moderately negatively related to general political knowledge. This remained the case after accounting for demographic and political-ideological variables and previous political knowledge. The latter finding indicates a causal relationship: the higher the Facebook use, the more general political knowledge declines. In 2019, social psychologist Jonathan Haidt argued that there is a "very good chance American democracy will fail, that in the next 30 years we will have a catastrophic failure of our democracy." According to a report by Oxford researchers including sociologist Philip N. Howard, social media played a major role in political polarization in the United States, due to computational propaganda, "the use of automation, algorithms, and big-data analytics to manipulate public life", such as the spread of fake news and conspiracy theories. The researchers highlighted the role of the Russian Internet Research Agency in attempts to undermine democracy in the US and exacerbate existing political divisions. The most prominent methods of misinformation were ostensibly organic posts rather than ads, and influence operation activity increased after, and was not limited to, the 2016 election. 
During the Russian interference in the 2016 United States elections, examples of efforts included "campaigning for African American voters to boycott elections or follow the wrong voting procedures in 2016", "encouraging extreme right-wing voters to be more confrontational", and "spreading sensationalist, conspiratorial, and other forms of junk political news and misinformation to voters across the political spectrum." Sarah Kreps of the Brookings Institution argues that foreign influence operations are nothing new but have been boosted by digital tools, and that in their wake the U.S. has had to spend exorbitantly on defensive measures "just to break even on democratic legitimacy." According to the United States House Permanent Select Committee on Intelligence, by 2018 organic content created by Russia's Internet Research Agency reached at least 126 million US Facebook users, while its politically divisive ads reached 11.4 million US Facebook users. Tweets by the IRA reached approximately 288 million American users. According to committee chair Adam Schiff, "[The Russian] social media campaign was designed to further a broader Kremlin objective: sowing discord in the U.S. by inflaming passions on a range of divisive issues. The Russians did so by weaving together fake accounts, pages, and communities to push politicized content and videos, and to mobilize real Americans to sign online petitions and join rallies and protests." In 2020, Michael McFaul, former U.S. Ambassador to Russia from 2012 to 2014, stated that he believes the U.S. has faced a democratic decline, stemming from elite polarization and damage done by President Donald Trump to trust in elections and bonds with democratic allies. McFaul states that the decline in democracy weakens national security and heavily restrains foreign policy. Portrayals of violence in the media can lead to fear of crime or terrorism and fear of "other" groups. Such portrayals can appear out of proportion to their actual frequency, and due to the availability heuristic, these fears can be out of proportion to the actual threat posed by other groups. A change in campaigning in 2022 that has been called "both a symptom of and a contributor to the ills" of American politics is a move away from participation in debates between candidates and in the "retail politicking" that has been a political "cliché ... for generations": pressing the flesh at "diners and state fairs ... town-hall-style meetings ... where citizens get to question their elected leaders and those running to replace them". Replacing these are "safer spaces" for candidates, "partisan news outlets, fund-raisers with supporters, friendly local crowds," where reporters and their challenging questions are "muscled away". Candidates in ten of the most competitive contests in 2022 for Senate (Arizona, North Carolina, Ohio, Georgia and Wisconsin) and governor (Texas and Wisconsin) have "agreed to just one debate, where voters not long ago could have expected to watch two or three". Observers see a danger in this avoidance of tougher interactions: it cuts down on the opportunities for candidates' characters and limitations to be revealed, and for elected officials to be held accountable to those who elected them. For the politicians, it creates an artificial environment where their positions appear uniformly popular and opposing views are angrily denounced, making compromise seem risky. 
"They run these campaigns in bubbles to these voters who are in bubbles", said former Representative Tom Davis, a moderate Republican who won seven terms in Congress in a Northern Virginia district and headed his party's congressional campaign committee. Causes suggested for the disinterest include the fewer competitive House of Representative districts, and fewer "swing voters", making attempts to appeal to those voters not cost effective. According to journalists Lisa Lerer and Jazmine Ulloa, "the trend of avoiding the public was initially driven by Republicans" but has "seeped across party lines" so that now, Democrats also avoid voters. Evidence suggests that there is a correlation between high levels of economic inequality and increased political polarization. According to Jonathan Hopkin in 2020, decades of neoliberal policies, which made the United States "the most extreme case of the subjection of society to the brute force of the market," resulted in unprecedented levels of inequality, and combined with an unstable financial system and limited political choices, paved the way for political instability and revolt, as evidenced by the resurgence of the American left as represented by Bernie Sanders 2016 presidential campaign and the rise of an "unlikely figure" like Donald Trump to the presidency of the United States. According to a 2020 study, "polarization is more intense when unemployment and inequality are high" and "when political elites clash over cultural issues such as immigration and national identity." One common hypothesis for polarization in the United States is the end of the Cold War and a greater absence of severe security threats. A 2021 study disputed this, finding little evidence that external threats reduce polarization. Effects Potentially both a cause and effect of polarization is "demonization" of political opponents, such as accusing them not just of being wrong about certain legislation or policies but of hating their country, or the use of what are called 'devil terms' — defined by communications professor Jennifer Mercieca as "things that are so unquestionably bad that you can't have a debate about them". Some examples include the accusations that President Biden has a plan, to "flood our country with terrorists, fentanyl, child traffickers, and MS-13 gang members", and that "Under President Biden's leadership ... We face an unprecedented assault on the American way of life by the radical left" (Mary E. Miller-IL), that "Democrats are so enamored of power that they want to legalize cheating in elections," (Andy Biggs-AZ), "America-hating Socialists seek to upend the American way of life based on freedom and liberty and replace it with dictatorial government that controls every aspect of our lives" (Mo Brooks-AL). While "demonizing communication style" has been in use "for years" among "media personalities and the occasional firebrand lawmaker", its use became popular among high level politicians with the election of Donald Trump and with the 2022 election has become widespread among "the 139 House Republicans who challenged the Electoral College vote" in January 2021, according to a 2022 study of "divisive rhetoric" in 3.7 million "tweets, Facebook ads, newsletters and congressional speeches" by the New York Times. Checking the Congressional Record, the Times found Republicans have "more than quadrupled their use of divisive rhetoric" since the early 2010s. 
An example of the escalation in aggressive attacks is Republican House leader Kevin McCarthy, who after the January 6 insurrection "implored members of his party to tone down their speech", saying, "We all must acknowledge how our words have contributed to the discord in America ... No more name calling, us versus them." However, in "dozens of tweets since then", McCarthy has referred to "Democrats as 'radical' leftists" who "prefer China to the United States" and are "ruining America". A "few Democrats", such as former Representative Bill Pascrell of New Jersey, also have "frequently" used "demonizing speech on Twitter". Some political scientists have warned that "factionalism is alarming because it makes compromise harder and normalizes" divisive rhetoric throughout the country. Some authors have found a correlation between polarization of political discourse and the prevalence of political violence. For instance, Rachel Kleinfeld, an expert on the rule of law and post-conflict governance, writes that political violence is extremely calculated and, while it may appear "spontaneous," it is the culmination of years of "discrimination and social segregation."[citation needed] A 2021 analysis by Kleinfeld found that despite efforts to reduce affective polarization, there is little evidence that it correlates with political violence. Hyper-partisanship can foster political violence. As polarization and partisanship grow, some individuals may resort to violence to further a political agenda, such as the January 6 United States Capitol attack. At other times, the violence may be aimed at punishing political enemies, such as the attack on Paul Pelosi. Today, most political violence emanates from the far right, in contrast to the 1960s and 1970s. One can also find examples of political violence from the political left, such as the 2017 Congressional baseball shooting aimed at Republicans, and Floyd Corkins's attack and attempted mass murder of staff at the Family Research Council in protest of their opposition to LGBTQ+ rights. American popular culture is fascinated with the prospects of political violence. The 2024 feature film Civil War explores the possibility of wide-scale political violence in the US today. The Twitter hashtag #secondcivilwar offers a more ambivalent, satirical perspective. There is mixed data regarding actual support for political violence among Americans today. In 2020, political scientists found that support for political violence had grown among both Democrats and Republicans: in 2017, only 8% of both Democrats and Republicans agreed that the use of political violence is at least "a little justified" if it advances their party's political agenda, but as of September 2020, that number jumped to 33% and 36%, respectively. In 2024, according to the Polarization Research Lab, fewer than 4% of Americans support political violence. In any case, the current polarized climate may create conditions that lead to more support for political violence within the country, unless there is meaningful reform. The General Social Survey periodically asks Americans whether they trust scientists. The proportion of American conservatives who say they place "a great deal of trust" in scientists fell from 48% in 1974 to 35% in 2010 and rose again to 39% in 2018. Liberals and independents, by contrast, report different levels of trust in science. The COVID-19 pandemic brought these differences front and center, with partisanship often being an indicator of how a citizen saw the gravity of the crisis. 
In the early stages of the pandemic, Republican governors often went against the advice of infectious disease experts while most of their Democratic counterparts translated the advice into policies such as stay-at-home orders. Similar to other polarizing topics in the United States, a person's attitude towards COVID-19 became a matter of political identity. While the crisis had very little precedent in U.S. history, reactions from both liberals and conservatives stemmed from long-held messaging cues among their parties. Conservatives responded to the anti-elite, states' rights, and small-government messaging cues surrounding the virus. This then translated into avid hostility towards any measure that limited a person's autonomy (mask requirements, school closures, lockdowns, vaccine mandates, etc.). Meanwhile, liberals' attitude towards science made them more likely to follow the guidance from institutions like the CDC and well-known medical experts, such as Dr. Anthony Fauci. Political polarization among elites is negatively correlated with legislative efficiency, which is defined by the total number of laws passed, as well as the number of "major enactments" and "key votes". Evidence suggests that political polarization of elites may more strongly affect efficiency than polarization of Congress itself, with authors hypothesizing that the personal relationships among members of Congress may enable them to reach compromises on contentiously advocated legislation, though not when polarized elites leave no leeway for such compromise. Negative effects of polarization on the United States Congress include increased gridlock and partisanship at the cost of quality and quantity of passed legislation. It also incentivizes stall tactics and closed rules, such as filibusters and excluding minority party members from committee deliberations. These strategies hamper transparency, oversight, and the government's ability to handle long-term domestic issues, especially those regarding the distribution of benefits. They foster animosity, as majority parties lose bipartisan and legislative coordination while trying to expedite legislation past them. Some scholars claim that political polarization is not so pervasive or destructive in influence, contending that partisan agreement is the historical trend in Congress and still frequent in the modern era, including on bills of political importance. Some studies have found approximately 80% of House bills passed in the modern era to have had support from both parties. The January 6 Capitol attack and the associated election denialism among congressional Republicans contributed to a decline in bipartisan legislative collaboration in subsequent Congresses, in particular for the Republicans who voted not to certify the 2020 election. Opinions on polarization's effects on the public are mixed. Some argue that the growing polarization in government has directly contributed to political polarization in the electorate, but this is not unanimous. Some scholars argue that polarization lowers public interest in politics, party identification and voter turnout. It encourages confrontational dynamics between parties that can lower overall public trust in and approval of government, and causes the public to perceive the general political debate as less civil, which can alienate voters. More polarized candidates, especially when voters aren't aware of the increase, also tend to be less representative of the public's wishes. 
On the other hand, others assert that elite polarization has galvanized the public's political participation in the United States, citing greater voting and nonvoting participation, engagement and investment in campaigns, and increased positive attitudes toward government responsiveness. Polarized parties become more ideologically unified, furthering voter knowledge about their positions and strengthening their appeal to similarly aligned voters. Affective polarization has risen in the US, with members of the public likely to say that supporters of the other major political party are hypocritical, closed-minded, and selfish. Based on survey results by the American National Election Study, affective polarization has increased significantly since 1980. This was determined from the difference between the views individuals held of their own political party and the views they held of the other party. Americans have also gotten increasingly uncomfortable with the idea of their child marrying someone of another political party. In 1960, 4–5% of Americans said they were uncomfortable with the idea. By 2010, a third of Democrats and half of all Republicans said they would be upset at this outcome. However, a recent study shows that affective polarization in Europe may not be primarily driven by outgroup derogation. As Mann and Ornstein argue, political polarization and the proliferation of media sources have "reinforce[d] tribal divisions, while enhancing a climate where facts are no longer driving the debate and deliberation, nor are they shared by the larger public." As other scholars have argued, the media often support and provoke the stalling and closed-rule tactics that disrupt regular policy procedure. Media can give the illusion that the electorate is more polarized than it truly is, pushing each end farther from the middle. The digital environment allows for the customization of information, with individuals seemingly never being exposed to opposing viewpoints. There is a long-standing belief that exposure to both sides of an argument will moderate political attitudes, and there is empirical evidence that voters often do self-moderate and that internet users also seek out news from opposing viewpoints. The increased use of social media since 2008 has encouraged those who normally did not consume news coverage to encounter headlines in their newsfeeds on a regular basis. The media has become more skilled at framing news stories to create the greatest outrage, regardless of where an outlet sits on the political spectrum. With the prevalence of "fake news", voters are more apt to cherry-pick between news sources as mistrust in the mainstream media rises. This mistrust stems from a number of factors, including political micro-targeting, bots, trolls, and digital algorithms; research has only just begun to name all of the factors at play. Allowing these perpetrators of political polarization to stand in the way of democracy is the biggest hindrance to healthy party disagreement. A concern with the increasing trend of political polarization is the social stigma stemming from either side towards their perceived opposition. It contributes to the chronic lack of compromise and to uncivil discourse, leading to both extremism and policy stalemates. The media takes advantage of such discord and shares anecdotal headlines meant to stoke the flames of polarization, rather than sharing generalized, and consequently tamer, broad statistics. 
While the media are not immune to general public opinion and reduced polarization allows them to appeal to a larger audience, polarized environments make it easier for the media and interest groups to hold elected officials more accountable for their policy promises and positions, which is generally healthy for democracy. The issue of political polarization in the US has also had noticeable effects on how citizens view the democratic process. In both of the last two presidential elections, a large segment of the losing party's voters raised concerns about the fairness of the election. When Donald Trump won the 2016 election, the share of Democratic voters who were "not confident" in the election results more than doubled compared to pre-election day data (14% on October 15, 2016, versus 28% on January 28, 2017). In 2020, three-in-four Republicans doubted the fairness of the presidential election. This narrative of a stolen election was in large part driven by Trump himself, who refused to concede the election until less than two weeks before Joe Biden's inauguration. That concession came only after the events of January 6, 2021, when thousands of Trump's supporters stormed the United States Capitol in an attempt to overturn the results of the election. Judicial systems can also be affected by the implications of political polarization. For the United States, in particular, polarization lowers confirmation rates of judges; in 2012, the confirmation rate of presidential circuit court appointments was approximately 50% as opposed to the above-90% rate in the late 1970s and early 1980s. More polarized parties have more aggressively blocked nominees and used tactics to hinder executive agendas. Political scientist Sarah Binder (2000) argues that "senatorial intolerance for the opposing party's nominees is itself a function of polarization." Negative consequences of this include higher vacancy rates on appellate courts, longer case-processing times and increased caseloads for judges. Voting margins have become much closer for filling vacancies on the Supreme Court. Justice Antonin Scalia was confirmed 98–0 in 1986; Ruth Bader Ginsburg was confirmed 96–3 in 1993. Samuel Alito was confirmed 58–42 in 2006, and Brett Kavanaugh was confirmed 50–48 in 2018. Political scientists argue that in highly polarized periods, nominees become less reflective of the moderate voter as "polarization impacts the appointment and ideological tenor of new federal judges." It also influences the politics of senatorial advice and consent, giving partisan presidents the power to appoint judges far to the left or right of center on the federal bench, undermining the legitimacy of the judicial branch. Ultimately, the increasing presence of ideology in a judicial system impacts the judiciary's credibility. Polarization can generate strong partisan critiques of federal judges, which can damage the public perception of the justice system and the legitimacy of the courts as nonpartisan legal arbiters. Political polarization can undermine the reliability of the US's alliance commitments, as well as undercut its credibility as an adversary. It makes it harder for the US to maintain a stable foreign policy and credibly signal its intentions. Political polarization can undercut unified agreement on foreign policy and harm a nation's international standing; divisiveness on foreign affairs strengthens enemies, discourages allies, and undermines a nation's resolve. 
Political scientists point to two primary implications of polarization with regard to the foreign policy of the United States. First, when the United States conducts relations abroad and appears divided, allies are less likely to trust its promises, enemies are more likely to predict its weaknesses, and uncertainty as to the country's position in world affairs rises. Second, elite opinion has a significant impact on the public's perception and understanding of foreign policy, a field where Americans have less prior knowledge to rely on. A 2021 study in Public Opinion Quarterly found evidence that polarization contributed to reductions in support for democratic norms. In a 2021 report, Freedom House said that political polarization was a cause of democratic backsliding in the U.S., since political polarization undermines the "idea of a common national identity" and impedes solutions to governance problems. Gerrymandering was singled out as a cause for this, since it creates safe seats for one party, which can lead that party to become more radical so its candidates can win their primary elections. Proposed solutions As polarization creates a less than ideal political climate, scholars have proposed multiple solutions to fix or mitigate the effects of political polarization in the United States. As of 2025, polarization was higher than at any point in recent history, reducing collaboration and mutual understanding between Democrats and Republicans, with members of both political parties increasingly viewing each other in an extremely negative way. As a result, partisan politics has begun to shape the relationships individuals have with others, with 50% of Republicans and 35% of Democrats likely to surround themselves with friends who share similar political views. Towards the respective ends of the political spectrum, nearly two-thirds (63%) of consistent conservatives and about half (49%) of consistent liberals say most of their close friends share their political views. Additionally, increased animosity and distrust among American politicians and citizens can be attributed to increased skepticism of American institutions, a problem that is greatly exacerbated by political polarization and may lead to democratic backsliding. Various changes to voting procedures have been proposed to reduce political polarization. Two proposed reforms would potentially move the U.S. from a two-party system to a multi-party system. A form of proportional representation would divide Congressional seats based on the percentage of people who voted for a specific political party. For instance, if Democrats won 20% of the vote, they would receive roughly 20% of the Congressional seats. Advocates of instant-runoff voting, or its multi-member equivalent, single transferable vote, say it encourages more moderation in political campaigns by allowing candidates to argue they should be the second choice for supporters of an opponent. It could potentially be used to replace the Electoral College with a less partisan popular vote. Elaine Kamarck of the Brookings Institution suggests ways to work within the two-party system, such as taking measures to increase voter turnout to elect more moderate representatives in Congress. She reasons that abolishing closed primaries may invite independents or individuals from the opposing political party to vote for a representative other than their registered party's candidate. 
In doing so, the strict ideological divides may subside, allowing for more moderate representatives to be elected. As a result, there would be an increasing ideological overlap in Congress and less polarization. Kamarck also proposes instituting a nationwide voting process like "California's top-two method," where there is a single primary open to candidates from all political parties, and the top two finishers advance to the general election. Once again, this process is meant to elect more moderates into government, but there is no evidence that this has happened. Advocates of fixed terms for justices of the Supreme Court of the United States argue that this would reduce the partisanship of confirmation battles, if both major parties are satisfied they will have the chance to make a certain number of appointments. Shifting to a more societal solution, social psychologists state that more social contact with those holding opposing political views may help mitigate political polarization. Lawrence Lessig argues for citizens' assemblies to start to unwind polarization. Assemblies create a space where representatives and citizens are encouraged to discuss political topics and issues in a constructive fashion, hopefully resulting in compromise or mutual understanding. Yet intergroup contact, as psychologists warn, must be created within specific parameters in order to create meaningful change. These boundaries, which make actual social implementation difficult, include a constant, meaningful dialogue between multiple members of each group. Constructive conversations should focus on principles, legislation, and policies and avoid inflammatory trigger words such as left and right, blue and red, and liberal and conservative. These words can make people become emotional and defensive when supporting their own side and stop listening with an open mind to what those on the other side are saying. In short, conversations can be more productive and meaningful by avoiding contrasting tribal and political identities. In Talking Sense about Politics: How to Overcome Political Polarization in Your Next Conversation, Jack Meacham encourages having conversations based on four neutral, impartial perspectives—detached, loyal, caring, and tactful—that underlie how people think about and respond to political issues. A number of groups in the U.S. actively host interpartisan discussions in an attempt to promote understanding and social cohesion. A third solution recognizes that American society, history, and political thought are more complex than what can be conveyed by only two partisan positions. Joel Garreau's The Nine Nations of North America, first published in 1981, was an early attempt to analyze such multiple positions. Colin Woodard revisited Garreau's theories in his 2011 book American Nations. Frank Bruni wrote that America was emerging from the 2016 election with four political parties: Paul Ryan Republicans, a Freedom Caucus, establishment Democrats, and an Elizabeth Warren and Bernie Sanders party. Similarly, David Brooks in 2016 identified four political parties: Trump's populist nationalism, a libertarian Freedom Caucus, a Bernie Sanders and Elizabeth Warren progressive party, and a Chuck Schumer and Nancy Pelosi Democratic establishment party. 
In Talking Sense about Politics: How to Overcome Political Polarization in Your Next Conversation, Jack Meacham argues that four fundamental, impartial perspectives have powered our economic and social progress and enabled Americans to better understand themselves and others. People holding the first of these four perspectives, the loyal perspective, aim to compete, be in charge, and win. The aim of people holding the second perspective, tactful, is to negotiate and get along with others. The third perspective, detached, is represented by people who want to disengage from others and work things out for themselves. People who reflect the fourth perspective, caring, aim to cooperate with and look out for others. George Packer, in Last Best Hope: America in Crisis and Renewal, also argues that America can best be understood not as two polarities but instead as four American narratives. There are eight social classes in America, according to David Brooks. The Pew Research Center's political typology, based on a survey of 10,221 adults in July 2021, includes nine groups. There are substantial divisions within both the Democratic and Republican parties. The Outsider Left, Ambivalent Right, and Stressed Sideliners have low interest in politics and low rates of voting. Some commentators propose accommodating partisan differences by taking advantage of federalism and moving more authority away from the federal government and into state and local governments. Ezra Klein proposes that having clear differences between the two main parties gives voters a better choice than having two political parties that have mostly the same views. But he suggests reducing the negative consequences of partisanship by eliminating "ticking time bombs" like fights over raising the federal debt ceiling. Various editorials have proposed that states of the U.S. secede and then form federations only with states that have voted for the same political party. These editorials note the increasingly polarized political strife in the U.S. between Republican voters and Democratic voters. They propose partition of the U.S. as a way of allowing both groups to achieve their policy goals while reducing the chances of civil war.[better source needed] Red states and blue states are states that typically vote for the Republican and Democratic parties, respectively. A 2021 poll found that 52% of Trump voters and 41% of Biden voters support partitioning the United States into multiple countries based on political party lines. A different poll that same year grouped the United States into five geographic regions, and found that 37% of Americans favored secession of their own region. In the South, 44% of Americans favored secession, with Republican support at 66%, while Democratic support was 47% in the Pacific states.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bo%C3%B6tes] | [TOKENS: 7955]
Contents Boötes Boötes (/boʊˈoʊtiːz/ boh-OH-teez) is a constellation in the northern sky, located between 0° and +60° declination, and 13 and 16 hours of right ascension on the celestial sphere. The name comes from Latin: Boōtēs, which comes from Ancient Greek: Βοώτης, romanized: Boṓtēs 'herdsman' or 'plowman' (literally, 'ox-driver'; from βοῦς boûs 'cow'). One of the 48 constellations described by the 2nd-century astronomer Ptolemy, Boötes is now one of the 88 modern constellations. It contains the fourth-brightest star in the night sky, the orange giant Arcturus. Epsilon Boötis, or Izar, is a colourful multiple star popular with amateur astronomers. Boötes is home to many other bright stars, including eight above the fourth magnitude and an additional 21 above the fifth magnitude, making a total of 29 stars easily visible to the naked eye. History and mythology In ancient Babylon, the stars of Boötes were known as SHU.PA. They were apparently depicted as the god Enlil, who was the leader of the Babylonian pantheon and special patron of farmers. Boötes may have been represented by the animal foreleg constellation in ancient Egypt, resembling that of an ox sufficiently to have been originally proposed as the "foreleg of ox" by Berio. Homer mentions Boötes in the Odyssey as a celestial reference for navigation, describing it as "late-setting" or "slow to set". Exactly whom Boötes is supposed to represent in Greek mythology is not clear. According to one version, he was Philomenus, a son of Demeter and twin brother of Plutus, a plowman who drove the oxen in the constellation Ursa Major. This agrees with the constellation's name. The ancient Greeks saw the asterism now called the "Big Dipper" or "Plough" as a cart with oxen. Some myths say that Boötes invented the plow and was memorialized for his ingenuity as a constellation. Another myth associated with Boötes by Hyginus is that of Icarius, who was schooled as a grape farmer and winemaker by Dionysus. Icarius made wine so strong that those who drank it appeared poisoned, which caused shepherds to avenge their supposedly poisoned friends by killing Icarius. Maera, Icarius' dog, brought his daughter Erigone to her father's body, whereupon both she and the dog died by suicide. Zeus then chose to honor all three by placing them in the sky as constellations: Icarius as Boötes, Erigone as Virgo, and Maera as Canis Major or Canis Minor. Following another reading, the constellation is identified with Arcas and also referred to as Arcas and Arcturus, son of Zeus and Callisto. Arcas was brought up by his maternal grandfather Lycaon, whom Zeus visited one day for a meal. To verify that the guest was really the king of the gods, Lycaon killed his grandson and prepared a meal made from his flesh. Zeus noticed and became very angry, transforming Lycaon into a wolf and restoring his son to life. In the meantime Callisto had been transformed into a she-bear by Zeus's wife Hera, who was angry at Zeus's infidelity. This is corroborated by the Greek name for Boötes, Arctophylax, which means "Bear Watcher". Callisto, in the form of a bear, was almost killed by her son, who was out hunting. Zeus rescued her, taking her into the sky where she became Ursa Major, "the Great Bear". Arcturus, the name of the constellation's brightest star, comes from the Greek word meaning "guardian of the bear". Sometimes Arcturus is depicted as leading the hunting dogs of nearby Canes Venatici and driving the bears of Ursa Major and Ursa Minor. 
Several former constellations were formed from stars now included in Boötes. Quadrans Muralis, the Quadrant, was a constellation created near Beta Boötis from faint stars. It was designated in 1795 by Jérôme Lalande, an astronomer who used a quadrant to perform detailed astrometric measurements. Lalande worked with Nicole-Reine Lepaute and others to predict the 1758 return of Halley's Comet. Quadrans Muralis was formed from the stars of eastern Boötes, western Hercules and Draco. It was originally called Le Mural by Jean Fortin in his 1795 Atlas Céleste; it was not given the name Quadrans Muralis until Johann Bode's 1801 Uranographia. The constellation was quite faint, with its brightest stars reaching the 5th magnitude. Mons Maenalus, representing the Maenalus mountains, was created by Johannes Hevelius in 1687 at the foot of the constellation's figure. The mountain was named for Maenalus, a son of Lycaon; one of Diana's hunting grounds, it was also holy to Pan. The stars of Boötes were incorporated into many different Chinese constellations. Arcturus was part of the most prominent of these, variously designated as the celestial king's throne (Tian Wang) or the Blue Dragon's horn (Daijiao); the name Daijiao, meaning "great horn", is more common. Arcturus was given such importance in Chinese celestial mythology because of its status marking the beginning of the lunar calendar, as well as its status as the brightest star in the northern night sky.[citation needed] Two constellations flanked Daijiao: Yousheti to the right and Zuosheti to the left; they represented companions that orchestrated the seasons. Zuosheti was formed from modern Zeta, Omicron and Pi Boötis, while Yousheti was formed from modern Eta, Tau and Upsilon Boötis. Dixi, the Emperor's ceremonial banquet mat, was north of Arcturus, consisting of the stars 12, 11 and 9 Boötis. Another northern constellation was Qigong, the Seven Dukes, which mostly straddled the Boötes-Hercules border. It included either Delta Boötis or Beta Boötis as its terminus. The other Chinese constellations made up of the stars of Boötes existed in the modern constellation's north; they are all representations of weapons. Tianqiang, the spear, was formed from Iota, Kappa and Theta Boötis; Genghe, variously representing a lance or shield, was formed from Epsilon, Rho and Sigma Boötis. There were also two weapons made up of a single star. Xuange, the halberd, was represented by Lambda Boötis, and Zhaoyao, either the sword or the spear, was represented by Gamma Boötis. Two Chinese constellations have an uncertain placement in Boötes. Kangchi, the lake, was placed south of Arcturus, though its specific location is disputed. It may have been placed entirely in Boötes, on either side of the Boötes-Virgo border, or on either side of the Virgo-Libra border. The constellation Zhouding, a bronze tripod-mounted container used for food, was sometimes cited as the stars 1, 2 and 6 Boötis. However, it has also been associated with three stars in Coma Berenices. Boötes is also known to Native American cultures. In the Yup'ik language, Boötes is Taluyaq, literally "fish trap", and the funnel-shaped part of the fish trap is known as Ilulirat. Characteristics Boötes is a constellation bordered by Virgo to the south, Coma Berenices and Canes Venatici to the west, Ursa Major to the northwest, Draco to the northeast, and Hercules, Corona Borealis and Serpens Caput to the east. 
The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Boo". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 16 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 13h 36.1m and 15h 49.3m, while the declination coordinates stretch from +7.36° to +55.1°. Covering 907 square degrees, Boötes culminates at midnight around 2 May and ranks 13th in area. Colloquially, its pattern of stars has been likened to a kite or ice cream cone. However, depictions of Boötes have varied historically. Aratus described him circling the north pole, herding the two bears. Later ancient Greek depictions, described by Ptolemy, have him holding the reins of his hunting dogs (Canes Venatici) in his left hand, with a spear, club, or staff in his right hand. After Hevelius introduced Mons Maenalus in 1681, Boötes was often depicted standing on the Peloponnese mountain. By 1801, when Johann Bode published his Uranographia, Boötes had acquired a sickle, which was also held in his left hand. The placement of Arcturus has also been mutable through the centuries. Traditionally, Arcturus lay between his thighs, as Ptolemy depicted him. However, Germanicus Caesar deviated from this tradition by placing Arcturus "where his garment is fastened by a knot". Features In his Uranometria, Johann Bayer used the Greek letters alpha through omega and then A to k to label what he saw as the most prominent 35 stars in the constellation, with subsequent astronomers splitting Kappa, Mu, Nu and Pi as two stars each. Nu is also the same star as Psi Herculis. John Flamsteed numbered 54 stars for the constellation. Located 36.7 light-years from Earth, Arcturus, or Alpha Boötis, is the brightest star in Boötes and the fourth-brightest star in the sky at an apparent magnitude of −0.05; it is also the brightest star north of the celestial equator, just shading out Vega and Capella. Its name comes from the Greek for "bear-keeper". An orange giant of spectral class K1.5III, Arcturus is an ageing star that has exhausted its core supply of hydrogen and cooled and expanded to a diameter of 27 solar diameters, equivalent to approximately 38 million kilometers. Though its mass is approximately one solar mass (M☉), Arcturus shines with 133 times the luminosity of the Sun (L☉). Bayer located Arcturus above the Herdsman's left knee in his Uranometria. Nearby Eta Boötis, or Muphrid, is the uppermost star denoting the left leg. It is a 2.68-magnitude star 37 light-years distant with a spectral class of G0IV, indicating it has just exhausted its core hydrogen and is beginning to expand and cool. It is 9 times as luminous as the Sun and has 2.7 times its diameter. Analysis of its spectrum reveals that it is a spectroscopic binary. Muphrid and Arcturus lie only 3.3 light-years away from each other. Viewed from Arcturus, Muphrid would have a visual magnitude of −2½, while Arcturus would be around visual magnitude −4½ when seen from Muphrid. Marking the herdsman's head is Beta Boötis, or Nekkar, a yellow giant of magnitude 3.5 and spectral type G8IIIa. Like Arcturus, it has expanded and cooled off the main sequence—likely to have lived most of its stellar life as a blue-white B-type main sequence star. Its common name comes from the Arabic phrase for "ox-driver". It is 219 light-years away and has a luminosity of 58 L☉. 
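The apparent magnitudes, distances, and absolute magnitudes quoted throughout this section are tied together by the standard distance modulus, m - M = 5 log10(d / 10 pc), where d is the distance in parsecs. The short Python sketch below is only an illustrative cross-check using rounded figures from the text; the constant and function names are ours, not taken from any catalogue, and the derived absolute magnitudes are not themselves stated in the article.

import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag, distance_ly):
    # Distance modulus: M = m - 5 * log10(d_pc / 10)
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# Arcturus: apparent magnitude -0.05 at 36.7 light-years -> absolute magnitude of roughly -0.3
print(round(absolute_magnitude(-0.05, 36.7), 2))

# Muphrid: apparent magnitude 2.68 at about 37 light-years -> absolute magnitude of roughly 2.4
print(round(absolute_magnitude(2.68, 37.0), 2))

Run as written, the sketch prints approximately −0.31 and 2.41, showing how the quoted apparent magnitudes and distances translate into intrinsic brightnesses.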
Located 86 light-years distant, Gamma Boötis, or Seginus, is a white giant star of spectral class A7III, with a luminosity 34 times and diameter 3.5 times that of the Sun. It is a Delta Scuti variable, ranging between magnitudes 3.02 and 3.07 every 7 hours. These stars are short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. Delta Boötis is a wide double star with a primary of magnitude 3.5 and a secondary of magnitude 7.8. The primary is a yellow giant that has cooled and expanded to 10.4 times the diameter of the Sun. Of spectral class G8IV, it is around 121 light-years away, while the secondary is a yellow main sequence star of spectral type G0V. The two are thought to take 120,000 years to orbit each other. Mu Boötis, known as Alkalurops, is a triple star popular with amateur astronomers. It has an overall magnitude of 4.3 and is 121 light-years away. Its name is from the Arabic phrase for "club" or "staff". The primary appears to be of magnitude 4.3 and is blue-white. The secondary appears to be of magnitude 6.5, but is actually a close double star itself with a primary of magnitude 7.0 and a secondary of magnitude 7.6. The secondary and tertiary stars have an orbital period of 260 years. The primary has an absolute magnitude of 2.6 and is of spectral class F0. The secondary and tertiary stars are separated by 2 arcseconds; the primary and secondary are separated by 109.1 arcseconds at an angle of 171 degrees. Nu Boötis is an optical double star. The primary is an orange giant of magnitude 5.0 and the secondary is a white star of magnitude 5.0. The primary is 870 light-years away and the secondary is 430 light-years. Epsilon Boötis, also known as Izar or Pulcherrima, is a close triple star popular with amateur astronomers and the most prominent binary star in Boötes. The primary is a yellow- or orange-hued magnitude 2.5 giant star, the secondary is a magnitude 4.6 blue-hued main-sequence star, and the tertiary is a magnitude 12.0 star. The system is 210 light-years away. The name "Izar" comes from the Arabic word for "girdle" or "loincloth", referring to its location in the constellation. The name "Pulcherrima" comes from the Latin phrase for "most beautiful", referring to its contrasting colors in a telescope. The primary and secondary stars are separated by 2.9 arcseconds at an angle of 341 degrees; the primary's spectral class is K0 and it has a luminosity of 200 L☉. To the naked eye, Izar has a magnitude of 2.37. Nearby Rho and Sigma Boötis denote the herdsman's waist. Rho is an orange giant of spectral type K3III located around 160 light-years from Earth. It is ever so slightly variable, wavering by 0.003 of a magnitude from its average of 3.57. Sigma, a yellow-white main-sequence star of spectral type F3V, is suspected of varying in brightness from 4.45 to 4.49. It is around 52 light-years distant. Traditionally known as Aulād al Dhiʼbah (أولاد الضباع – aulād al dhiʼb), "the Whelps of the Hyenas", Theta, Iota, Kappa and Lambda Boötis (or Xuange) are a small group of stars in the far north of the constellation. The magnitude 4.05 Theta Boötis has a spectral type of F7 and an absolute magnitude of 3.8. Iota Boötis is a triple star with a primary of magnitude 4.8 and spectral class of A7, a secondary of magnitude 7.5, and a tertiary of magnitude 12.6. The primary is 97 light-years away. The primary and secondary stars are separated by 38.5 arcseconds, at an angle of 33 degrees. 
The primary and tertiary stars are separated by 86.7 arcseconds at an angle of 194 degrees. Both the primary and tertiary appear white in a telescope, but the secondary appears yellow-hued. Kappa Boötis is another wide double star. The primary is 155 light-years away and has a magnitude of 4.5. The secondary is 196 light-years away and has a magnitude of 6.6. The two components are separated by 13.4 arcseconds, at an angle of 236 degrees. The primary, with spectral class A7, appears white and the secondary appears bluish. An apparent magnitude 4.18 type A0p star, Lambda Boötis is the prototype of a class of chemically peculiar stars, only some of which pulsate as Delta Scuti-type stars. The distinction between the Lambda Boötis stars as a class of stars with peculiar spectra, and the Delta Scuti stars whose class describes pulsation in low-overtone pressure modes, is an important one. While many Lambda Boötis stars pulsate and are Delta Scuti stars, not many Delta Scuti stars have Lambda Boötis peculiarities, since the Lambda Boötis stars are a much rarer class whose members can be found both inside and outside the Delta Scuti instability strip. Lambda Boötis stars are dwarf stars that can be either spectral class A or F. Like BL Boötis-type stars they are metal-poor. Scientists have had difficulty explaining the characteristics of Lambda Boötis stars, partly because only around 60 confirmed members exist, but also due to heterogeneity in the literature. Lambda has an absolute magnitude of 1.8. There are two dimmer F-type stars, magnitude 4.83 12 Boötis, class F8; and magnitude 4.93 45 Boötis, class F5. Xi Boötis is a G8 yellow dwarf of magnitude 4.55, and absolute magnitude is 5.5. Two dimmer G-type stars are magnitude 4.86 31 Boötis, class G8, and magnitude 4.76 44 Boötis, class G0. Of apparent magnitude 4.06, Upsilon Boötis has a spectral class of K5 and an absolute magnitude of −0.3. Dimmer than Upsilon Boötis is magnitude 4.54 Phi Boötis, with a spectral class of K2 and an absolute magnitude of −0.1. Just slightly dimmer than Phi at magnitude 4.60 is O Boötis, which, like Izar, has a spectral class of K0. O Boötis has an absolute magnitude of 0.2. The other four dim stars are magnitude 4.91 6 Boötis, class K4; magnitude 4.86 20 Boötis, class K3; magnitude 4.81 Omega Boötis, class K4; and magnitude 4.83 A Boötis, class K1. There is one bright B-class star in Boötes; magnitude 4.93 Pi1 Boötis, also called Alazal. It has a spectral class of B9 and is 40 parsecs from Earth. There is also one M-type star, magnitude 4.81 34 Boötis. It is of class gM0. Besides Pulcherrima and Alkalurops, there are several other binary stars in Boötes: Two of the brighter Mira-type variable stars in the constellation are R and S Boötis. Both are red giants that range greatly in magnitude—from 6.2 to 13.1 over 223.4 days, and 7.8 to 13.8 over a period of 270.7 days, respectively. Also red giants, V and W Boötis are semi-regular variable stars that range in magnitude from 7.0 to 12.0 over a period of 258 days, and magnitude 4.7 to 5.4 over 450 days, respectively. BL Boötis is the prototype of its class of pulsating variable stars, the anomalous Cepheids. These stars are somewhat similar to Cepheid variables, but they do not have the same relationship between their period and luminosity. Their periods are similar to RRAB variables; however, they are far brighter than these stars. BL Boötis is a member of the cluster NGC 5466. 
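The arcsecond separations quoted for the double and triple stars above translate into projected (minimum) physical separations through the small-angle rule s[AU] ≈ d[pc] × θ[arcsec]. A minimal sketch using only figures stated in the text; the results are sky-projected separations, not true three-dimensional distances.

```python
LY_PER_PARSEC = 3.2616

def projected_separation_au(distance_ly, separation_arcsec):
    """Sky-projected separation in AU: s = d[pc] * theta[arcsec] (small-angle approximation)."""
    return (distance_ly / LY_PER_PARSEC) * separation_arcsec

print(round(projected_separation_au(210, 2.9)))    # Izar A-B: ~190 AU projected
print(round(projected_separation_au(121, 109.1)))  # Mu Boötis AB-C: ~4000 AU projected
print(round(projected_separation_au(97, 86.7)))    # Iota Boötis A-C: ~2600 AU projected
```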
Anomalous Cepheids are metal poor and have masses not much larger than the Sun's, on average, 1.5 M☉. BL Boötis type stars are a subtype of RR Lyrae variables. T Boötis was a nova observed in April 1860 at a magnitude of 9.7. It has never been observed since, but that does not preclude the possibility of it being a highly irregular variable star or a recurrent nova. Extrasolar planets have been discovered encircling ten stars in Boötes as of 2012. Tau Boötis is orbited by a large planet, discovered in 1999. The host star itself is a magnitude 4.5 star of type F7V, 15.6 parsecs from Earth. It has a mass of 1.3 M☉ and a radius of 1.331 solar radii (R☉); a companion, GJ527B, orbits at a distance of 240 AU. Tau Boötis b, the sole planet discovered in the system, orbits at a distance of 0.046 AU every 3.31 days. Discovered through radial velocity measurements, it has a mass of 5.95 Jupiter masses (MJ). This makes it a hot Jupiter. The host star and planet are tidally locked, meaning that the planet's orbit and the star's particularly high rotation are synchronized. Furthermore, a slight variability in the host star's light may be caused by magnetic interactions with the planet. Carbon monoxide is present in the planet's atmosphere. Tau Boötis b does not transit its star, rather, its orbit is inclined 46 degrees. Like Tau Boötis b, HAT-P-4b is also a hot Jupiter. It is noted for orbiting a particularly metal-rich host star and being of low density. Discovered in 2007, HAT-P-4 b has a mass of 0.68 MJ and a radius of 1.27 RJ. It orbits every 3.05 days at a distance of 0.04 AU. HAT-P-4, the host star, is an F-type star of magnitude 11.2, 310 parsecs from Earth. It is larger than the Sun, with a mass of 1.26 M☉ and a radius of 1.59 R☉. Boötes is also home to multiple-planet systems. HD 128311 is the host star for a two-planet system, consisting of HD 128311 b and HD 128311 c, discovered in 2002 and 2005, respectively. HD 128311 b is the smaller planet, with a mass of 2.18 MJ; it was discovered through radial velocity observations. It orbits at almost the same distance as Earth, at 1.099 AU; however, its orbital period is significantly longer at 448.6 days. The larger of the two, HD 128311 c, has a mass of 3.21 MJ and was discovered in the same manner. It orbits every 919 days inclined at 50°, and is 1.76 AU from the host star. The host star, HD 128311, is a K0V-type star located 16.6 parsecs from Earth. It is smaller than the Sun, with a mass of 0.84 M☉ and a radius of 0.73 R☉; it also appears below the threshold of naked-eye visibility at an apparent magnitude of 7.51. There are several single-planet systems in Boötes. HD 132406 is a Sun-like star of spectral type G0V with an apparent magnitude of 8.45, 231.5 light-years from Earth. It has a mass of 1.09 M☉ and a radius of 1 R☉. The star is orbited by a gas giant, HD 132406 b, discovered in 2007. HD 132406 orbits 1.98 AU from its host star with a period of 974 days and has a mass of 5.61 MJ. The planet was discovered by the radial velocity method. HD 131496 is also encircled by one planet, HD 131496 b. The star is of type K0 and is located 110 parsecs from Earth; it appears at a visual magnitude of 7.96. It is significantly larger than the Sun, with a mass of 1.61 M☉ and a radius of 4.6 solar radii. Its one planet, discovered in 2011 by the radial velocity method, has a mass of 2.2 MJ; its radius is as yet undetermined. HD 131496 b orbits at a distance of 2.09 AU with a period of 883 days. 
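The period and separation pairs quoted for these planetary systems are consistent with Kepler's third law, P[yr]² = a[AU]³ / M[M☉], when the planet's mass is neglected. A rough cross-check in Python using only the rounded figures above; small differences from the quoted periods are expected because of that rounding.

```python
import math

def orbital_period_days(a_au, star_mass_msun):
    """Kepler's third law with the planet's mass neglected: P[yr]^2 = a[AU]^3 / M[Msun]."""
    return math.sqrt(a_au ** 3 / star_mass_msun) * 365.25

print(round(orbital_period_days(0.046, 1.3), 1))  # Tau Boötis b: ~3.2 days (3.31 quoted)
print(round(orbital_period_days(1.76, 0.84)))     # HD 128311 c: ~930 days (919 quoted)
print(round(orbital_period_days(2.09, 1.61)))     # HD 131496 b: ~870 days (883 quoted)
```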
Another single planetary system in Boötes is the HD 132563 system, a triple star system. The parent star, technically HD 132563B, is a star of magnitude 9.47, 96 parsecs from Earth. It is almost exactly the size of the Sun, with the same radius and a mass only 1% greater. Its planet, HD 132563B b, was discovered in 2011 by the radial velocity method. 1.49 MJ, it orbits 2.62 AU from its star with a period of 1544 days. Its orbit is somewhat elliptical, with an eccentricity of 0.22. HD 132563B b is one of very few planets found in triple star systems; it orbits the isolated member of the system, which is separated from the other components, a spectroscopic binary, by 400 AU. Also discovered through the radial velocity method, albeit a year earlier, is HD 136418 b, a two-Jupiter-mass planet that orbits the star HD 136418 at a distance of 1.32 AU with a period of 464.3 days. Its host star is a magnitude 7.88 G5-type star, 98.2 parsecs from Earth. It has a radius of 3.4 R☉ and a mass of 1.33 M☉. WASP-14 b is one of the most massive and dense exoplanets known, with a mass of 7.341 MJ and a radius of 1.281 RJ. Discovered via the transit method, it orbits 0.036 AU from its host star with a period of 2.24 days. WASP-14 b has a density of 4.6 grams per cubic centimeter, making it one of the densest exoplanets known. Its host star, WASP-14, is an F5V-type star of magnitude 9.75, 160 parsecs from Earth. It has a radius of 1.306 R☉ and a mass of 1.211 M☉. It also has a very high proportion of lithium. Boötes is in a part of the celestial sphere facing away from the plane of our home Milky Way galaxy, and so does not have open clusters or nebulae. Instead, it has one bright globular cluster and many faint galaxies. The globular cluster NGC 5466 has an overall magnitude of 9.1 and a diameter of 11 arcminutes. It is a very loose globular cluster with fairly few stars and may appear as a rich, concentrated open cluster in a telescope. NGC 5466 is classified as a Shapley–Sawyer Concentration Class 12 cluster, reflecting its sparsity. Its fairly large diameter means that it has a low surface brightness, so it appears far dimmer than the catalogued magnitude of 9.1 and requires a large amateur telescope to view. Only approximately 12 stars are resolved by an amateur instrument. Boötes has two bright galaxies. NGC 5248 (Caldwell 45) is a type Sc galaxy (a variety of spiral galaxy) of magnitude 10.2. It measures 6.5 by 4.9 arcminutes. Fifty million light-years from Earth, NGC 5248 is a member of the Virgo Cluster of galaxies; it has dim outer arms and obvious H II regions, dust lanes and young star clusters. NGC 5676 is another type Sc galaxy of magnitude 10.9. It measures 3.9 by 2.0 arcminutes. Other galaxies include NGC 5008, a type Sc emission-line galaxy, NGC 5548, a type S Seyfert galaxy, NGC 5653, a type S HII galaxy, NGC 5778 (also classified as NGC 5825), a type E galaxy that is the brightest of its cluster, NGC 5886, and NGC 5888, a type SBb galaxy. NGC 5698 is a barred spiral galaxy, notable for being the host of the 2005 supernova SN 2005bc, which peaked at magnitude 15.3. Further away lies the 250-million-light-year-diameter Boötes Void, a huge space largely empty of galaxies. Discovered by Robert Kirshner and colleagues in 1981, it is roughly 700 million light-years from Earth. Beyond it and within the bounds of the constellation, lie two superclusters at around 830 million and 1 billion light-years distant. 
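The 4.6 g/cm³ density quoted for WASP-14 b follows directly from its mass and radius. A minimal sketch, assuming Jupiter's mean radius for the conversion; using the equatorial radius instead gives a value closer to 4.3 g/cm³.

```python
import math

M_JUP_KG = 1.898e27   # Jupiter mass
R_JUP_M = 6.9911e7    # Jupiter mean radius (assumed conversion factor)

mass_kg = 7.341 * M_JUP_KG
radius_m = 1.281 * R_JUP_M
volume_m3 = 4.0 / 3.0 * math.pi * radius_m ** 3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0
print(round(density_g_cm3, 1))   # ~4.6 g/cm^3, matching the figure quoted above
```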
The Hercules–Corona Borealis Great Wall, the largest-known structure in the Universe, covers a significant part of Boötes. Boötes is home to the Quadrantid meteor shower, the most prolific annual meteor shower. It was discovered in January 1835 and named in 1864 by Alexander Herschel. The radiant is located in northern Boötes near Kappa Boötis, in its namesake former constellation of Quadrans Muralis. Quadrantid meteors are dim, but have a peak visible hourly rate of approximately 100 per hour on January 3–4. The zenithal hourly rate of the Quadrantids is approximately 130 meteors per hour at their peak; it is also a very narrow shower. The Quadrantids are notoriously difficult to observe because of a low radiant and often inclement weather. The parent body of the meteor shower has been disputed for decades; however, Peter Jenniskens has proposed 2003 EH1, a minor planet, as the parent. 2003 EH1 may be linked to C/1490 Y1, a comet previously thought to be a potential parent body for the Quadrantids. 2003 EH1 is a short-period comet of the Jupiter family; 500 years ago, it experienced a catastrophic breakup event. It is now dormant. The Quadrantids had notable displays in 1982, 1985 and 2004. Meteors from this shower often appear to have a blue hue and travel at a moderate speed of 41.5–43 kilometers per second. On April 28, 1984, a remarkable outburst of the normally placid Alpha Bootids was observed by visual observer Frank Witte from 00:00 to 2:30 UTC. In a 6 cm telescope, he observed 433 meteors in a field of view near Arcturus with a diameter of less than 1°. Peter Jenniskens comments that this outburst resembled a "typical dust trail crossing". The Alpha Bootids normally begin on April 14, peaking on April 27 and 28, and finishing on May 12. Its meteors are slow-moving, with a velocity of 20.9 kilometers per second. They may be related to Comet 73P/Schwassmann–Wachmann 3, but this connection is only theorized. The June Bootids, also known as the Iota Draconids, is a meteor shower associated with the comet 7P/Pons–Winnecke, first recognized on May 27, 1916, by William F. Denning. The shower, with its slow meteors, was not observed prior to 1916 because Earth did not cross the comet's dust trail until Jupiter perturbed Pons–Winnecke's orbit, causing it to come within 0.03 AU (4.5 million km; 2.8 million mi) of Earth's orbit the first year the June Bootids were observed. In 1982, E. A. Reznikov discovered that the 1916 outburst was caused by material released from the comet in 1819. Another outburst of the June Bootids was not observed until 1998, because Comet Pons–Winnecke's orbit was not in a favorable position. However, on June 27, 1998, an outburst of meteors radiating from Boötes, later confirmed to be associated with Pons-Winnecke, was observed. They were incredibly long-lived, with trails of the brightest meteors lasting several seconds at times. Many fireballs, green-hued trails, and even some meteors that cast shadows were observed throughout the outburst, which had a maximum zenithal hourly rate of 200–300 meteors per hour. Two Russian astronomers determined in 2002 that material ejected from the comet in 1825 was responsible for the 1998 outburst. Ejecta from the comet dating to 1819, 1825 and 1830 was predicted to enter Earth's atmosphere on June 23, 2004. The predictions of a shower less spectacular than the 1998 showing were borne out in a display that had a maximum zenithal hourly rate of 16–20 meteors per hour that night. 
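The June Bootids are not expected to have another outburst in the next 50 years. Typically, only 1–2 dim, very slow meteors are visible per hour; the average June Bootid has a magnitude of 5.0. The shower is related to the Alpha Draconids and the Bootids-Draconids. It lasts from June 27 to July 5, with a peak on the night of June 28. The June Bootids are classified as a class III shower (variable) and have an average entry velocity of 18 kilometers per second. The radiant is located 7 degrees north of Beta Boötis. The Beta Bootids is a weak shower that begins on January 5, peaks on January 16, and ends on January 18. Its meteors travel at 43 km/s. The January Bootids is a short, young meteor shower that begins on January 9, peaks from January 16 to January 18, and ends on January 18. The Phi Bootids is another weak shower radiating from Boötes. It begins on April 16, peaks on April 30 and May 1, and ends on May 12. Its meteors are slow-moving, with a velocity of 15.1 km/s. They were discovered in 2006. The shower's peak hourly rate can be as high as six meteors per hour. Though named for a star in Boötes, the Phi Bootid radiant has moved into Hercules. The meteor stream is associated with three different asteroids: 1620 Geographos, 2062 Aten and 1978 CA. The Lambda Bootids, part of the Bootid-Coronae Borealid Complex, are a weak annual shower with moderately fast meteors (41.75 km/s). The complex includes the Lambda Bootids, as well as the Theta Coronae Borealids and Xi Coronae Borealids. All of the Bootid-Coronae Borealid showers are Jupiter-family comet showers; the streams in the complex have highly inclined orbits. There are several minor showers in Boötes, some of whose existence is yet to be verified. The Rho Bootids radiate from near the namesake star and were hypothesized in 2010. The average Rho Bootid has an entry velocity of 43 km/s. The shower peaks in November and lasts for three days. It is part of the SMA complex, a group of meteor showers related to the Taurids, which is in turn linked to the comet 2P/Encke. However, the link to the Taurid shower remains unconfirmed and may be a chance correlation. Another such shower is the Gamma Bootids, which were hypothesized in 2006. Gamma Bootids have an entry velocity of 50.3 km/s. The Nu Bootids, hypothesized in 2012, have faster meteors, with an entry velocity of 62.8 km/s.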
The June Bootids are not expected to have another outburst in the next 50 years. Typically, only 1–2 dim, very slow meteors are visible per hour; the average June Bootid has a magnitude of 5.0. It is related to the Alpha Draconids and the Bootids-Draconids. The shower lasts from June 27 to July 5, with a peak on the night of June 28. The June Bootids are classified as a class III shower (variable), and has an average entry velocity of 18 kilometers per second. Its radiant is located 7 degrees north of Beta Boötis. The Beta Bootids is a weak shower that begins on January 5, peaks on January 16, and ends on January 18. Its meteors travel at 43 km/s. The January Bootids is a short, young meteor shower that begins on January 9, peaks from January 16 to January 18, and ends on January 18. The Phi Bootids is another weak shower radiating from Boötes. It begins on April 16, peaks on April 30 and May 1, and ends on May 12. Its meteors are slow-moving, with a velocity of 15.1 km/s. They were discovered in 2006. The shower's peak hourly rate can be as high as six meteors per hour. Though named for a star in Boötes, the Phi Bootid radiant has moved into Hercules. The meteor stream is associated with three different asteroids: 1620 Geographos, 2062 Aten and 1978 CA. The Lambda Bootids, part of the Bootid-Coronae Borealid Complex, are a weak annual shower with moderately fast meteors; 41.75 km/s. The complex includes the Lambda Bootids, as well as the Theta Coronae Borealids and Xi Coronae Borealids. All of the Bootid-Coronae Borealid showers are Jupiter family comet showers; the streams in the complex have highly inclined orbits. There are several minor showers in Boötes, some of whose existence is yet to be verified. The Rho Bootids radiate from near the namesake star, and were hypothesized in 2010. The average Rho Bootid has an entry velocity of 43 km/s. It peaks in November and lasts for three days. The Rho Bootid shower is part of the SMA complex, a group of meteor showers related to the Taurids, which is in turn linked to the comet 2P/Encke. However, the link to the Taurid shower remains unconfirmed and may be a chance correlation. Another such shower is the Gamma Bootids, which were hypothesized in 2006. Gamma Bootids have an entry velocity of 50.3 km/s. The Nu Bootids, hypothesized in 2012, have faster meteors, with an entry velocity of 62.8 km/s. See also References Citations References External links
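The Quadrantid figures quoted earlier, a zenithal hourly rate near 130 but a visible rate closer to 100, can be related with the usual ZHR correction, HR ≈ ZHR × sin(h) / r^(6.5 − lm), where h is the radiant's elevation, lm the limiting magnitude of the sky, and r the population index. The sketch below is purely illustrative: the population index of 2.1 and the elevations are assumptions, not values from the text.

```python
import math

def observed_hourly_rate(zhr, radiant_elevation_deg, limiting_mag=6.5, pop_index=2.1):
    """Observed rate implied by a ZHR: HR = ZHR * sin(h) / r**(6.5 - lm), unobstructed sky."""
    return zhr * math.sin(math.radians(radiant_elevation_deg)) / pop_index ** (6.5 - limiting_mag)

# Quadrantids, ZHR ~130 (figure quoted above); elevations are illustrative.
print(round(observed_hourly_rate(130, 50)))   # ~100/hour with the radiant 50 degrees up
print(round(observed_hourly_rate(130, 20)))   # ~44/hour with a low radiant, even in a dark sky
```

With the radiant low on the horizon, even a dark sky yields far fewer visible meteors, which is one reason the shower is considered difficult to observe.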
========================================
[SOURCE: https://en.wikipedia.org/wiki/Whac-A-Mole] | [TOKENS: 1784]
Contents Whac-A-Mole Whac-A-Mole is a Japanese arcade game that was created in 1975 by the amusements manufacturer TOGO in Japan, where it was originally known as Mogura Taiji (モグラ退治; "Mole Buster") or Mogura Tataki (モグラたたき; "Mole Smash"). A typical Whac-A-Mole machine consists of a waist-level cabinet with a play area and display screen, and a large, soft mallet. Five to eight holes in the play area top are filled with small, plastic, cartoonish moles, or other characters, which pop up at random. Points are scored by, as the name suggests, whacking each mole as it appears. The faster the reaction, the higher the score. Play The cabinet has a three-digit readout of the current player's score and, on later models, a "best score of the day" readout. The mallet is usually attached to the game by a rope to prevent it from being lost or stolen. Current versions of Whac-A-Mole include three displays for Bonus Score, High Score, and the current game score. Home versions, distributed by Bob's Space Racers, have one display with the current score. If the player does not strike a mole within a certain time or with enough force, it eventually sinks back into its hole with no score. Although gameplay starts out slow enough for most people to hit all of the moles that rise, it gradually increases in speed, with each mole spending less time exposed and with more moles exposed at once. After a designated time limit, the game ends, regardless of the player's score. The final score is based on the number of moles the player struck. In addition to the single-player game described above, there is a multi-player game, most often found at amusement parks. In this version, there is a large bank of individual Whac-A-Mole games linked together, and the goal is to be the first player to reach a designated score (rather than hitting the most moles within a certain time). In most versions, striking a mole is worth ten points, and the winner is the first player to reach a score of 150 (15 moles). The winner receives a prize, typically a small stuffed animal, which can be traded up for a larger stuffed animal should the player win again. Game play options have become more adjustable, allowing the operator/owner to selectively alter the high score, hits points, rate of progressive speed, and game time. The game is still used for teaching auditory processing and attention. History Mogura Taiji was invented in 1975 by Kazuo Yamada of TOGO, based on ten of the designer's pencil sketches from 1974. TOGO released it as Mogura Taiji to Japanese amusement arcades in 1975. It became a major commercial success in Japan, where it was the second highest-grossing electro-mechanical arcade game of 1976 and again in 1977, second only to Namco's popular arcade racing game F-1 in both years. Mogura Taiji was licensed to Bandai in 1977. Bandai (now part of Bandai Namco Holdings) introduced the game to the Japanese home market as a toy in 1977, called Mogura Tataki (モグラたたき; "Mole Smash"); it was a major hit by 1978, selling over 1 million units. In the late 1970s, arcade centers in Japan were flooded with similar, derivative "mole buster" games. Mogura Taiji has since been commonly found at Japanese festivals. Mogura Taiji made its North American debut in November 1976 at the International Association of Amusement Parks and Attractions (IAAPA) show, where it drew attention for being the first mallet game of its type. 
Gerald Denton and Donny Anderson saw it and recognized its potential as a carnival game that could be put in a trailer. Denton showed the game to Aaron Fechter and assigned him the task of building their own version of the game. Fechter coined the name "Whac-A-Mole" and added air cylinders "so that when air pushed up the moles, the air acted as a cushion". He developed the prototype in 1977, and Denton and Anderson presented it to the founder of Bob's Space Racers, Bob Cassata, that year. After Cassata made further refinements, Bob's Space Racers began selling the game in 1977. In 1978 it debuted at a midway exhibition show, where it was the most popular game. The following year, it appeared in pinball parlours. In 1980, it was sold in the carnival, amusement park and coin-op arcade markets. Whac-A-Mole has since become a popular carnival game. Back in Japan, Namco, who were beginning to shift towards arcade video game production with hits like Galaxian (1979) and Pac-Man (1980), noticed that arcade centers in Japan were flooded with "mole buster" games. To capitalize on their popularity, Namco began work on its own mole game with a unique motif to help it stand out. Sweet Licks (1981) was originally designed by TOGO, which had named it Mole Attack; Namco purchased the rights to the game and gave it new artwork. Namco's version was designed by Yukio Ishikawa, a mechanical game designer for the company. The game was themed around cake and pastries to help attract women. It was the first arcade game to use an LCD monitor to display the player's score. Sweet Licks became popular in Japan and was subsequently released in North America in April 1982, then in Europe, where it became popular in the 1980s. As of 2005, Bob's Space Racers had generated $1,500,000,000 (equivalent to $2,500,000,000 in 2025) in revenue from Whac-A-Mole machines in amusement parks. The same year, Hasbro secured licensing rights for Whac-A-Mole machines in the home consumer market. Variations The original Whac-A-Mole game inspired the first genre of games with a violent aspect as central to their user experience. Researchers have used Whac-A-Mole and its variations to study the violent aspects of these games. The Whac-A-Mole game trademark was originally owned by Bob's Space Racers but since 2008 has been owned by Mattel. Machines with similar gameplay are sold under other names. Whac-A-Mole has also been the basis for a number of internet games and mobile games that are similar in play and strategy. Engineer Tim Hunkin built and installed a "Whack a Banker" machine at Southwold Pier in England in 2009, made from parts of a previous "Whack a Warden" machine. Mattel Television is partnered with Fremantle to develop a game show inspired by the game, which has yet to debut. The show will be an elimination-style, unscripted series to determine the "Whac-a-Mole Champion". The competition will involve a life-size version of the game, as well as obstacle courses and other "surprising twist[s]". Design The moles are mounted on rods and raised by a lever and crank system. When the user strikes the mole, a microswitch is activated by a pin housed within the mole and the system lowers the mole. The timing of the moles was originally controlled by tones from an audio tape, which then drove an air cylinder system.
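The single-player rules described under Play amount to a simple timed loop in which each successful whack scores and the moles stay up for progressively shorter intervals. The Python sketch below is only an illustrative model of that loop; the point values, timings, and speed-up rate are assumptions, not specifications of any actual machine.

```python
import random

def play_single_player(game_seconds=30, hit_chance=0.7, seed=None):
    """Illustrative model of the single-player round described above:
    each mole hit scores, moles pop up for shorter and shorter intervals,
    and the round ends when the timer runs out (all numbers are assumed)."""
    rng = random.Random(seed)
    t, score, mole_up_time = 0.0, 0, 1.5
    while t < game_seconds:
        if rng.random() < hit_chance:                   # player whacks this mole in time
            score += 10
        t += mole_up_time + 0.3                          # next mole pops up after a short pause
        mole_up_time = max(0.4, mole_up_time * 0.95)     # gameplay gradually speeds up
    return score

print(play_single_player(seed=1))
```

The multi-player variant described above would instead end the loop as soon as any player's score reaches the target (150 points at 10 points per mole).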
Colloquial usage The term "whac-a-mole" (or "whack-a-mole") is often used colloquially to refer to a situation characterized by a series of futile, Sisyphean tasks, where the successful completion of one just yields another popping up elsewhere. In computer programming and debugging, it refers to the prospect that fixing one bug will cause a new one to appear as a result. In an Internet context, it refers to the challenge of fending off recurring spammers, vandals, pop-up ads, malware, ransomware, and other distractions, annoyances, and harm. In law enforcement, it refers to criminal activity popping up in another part of an area after increased enforcement in one district reduces it there. In a military context, it refers to ostensibly inferior opposing troops continuing to appear after previous waves have been eliminated. It has also been applied to fake news, where, as soon as one story is debunked, another appears elsewhere – or sooner.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chameleon] | [TOKENS: 5275]
Contents Chameleon Chameleons or chamaeleons (family Chamaeleonidae) are a distinctive and highly specialized clade of Old World lizards with 200 species described as of June 2015. The members of this family are best known for their distinct range of colours, being capable of colour-shifting camouflage. The large number of species in the family exhibit considerable variability in their capacity to change colour. For some, it is more of a shift of brightness (shades of brown); for others, a plethora of colour-combinations (reds, yellows, greens, blues) can be seen. Chameleons are also distinguished by their zygodactylous feet, their prehensile tail, their laterally compressed bodies, their head casques, their projectile tongues used for catching prey, their swaying gait, and in some species crests or horns on their brow and snout. Chameleons' eyes are independently mobile, and because of this the chameleon's brain is constantly analyzing two separate, individual images of its environment. When hunting prey, the eyes focus forward in coordination, affording stereoscopic vision. Chameleons are diurnal and adapted for visual hunting of invertebrates, mostly insects, although the large species also can catch small vertebrates. Chameleons typically are arboreal, but there are also many species that live on the ground. The arboreal species use their prehensile tail as an extra anchor point when they are moving or resting in trees or bushes; because of this, their tail is often referred to as a "fifth limb". Depending on species, they range from rainforest to desert conditions and from lowlands to highlands, with the vast majority occurring in Africa (about half of the species are restricted to Madagascar), but with a single species in southern Europe, and a few across southern Asia as far east as India and Sri Lanka. They have been introduced to Hawaii and Florida. Etymology The English word chameleon (/kəˈmiːliən/ kuh-MEEL-ee-un, /kəˈmil.jən/ kuh-MEEL-yuhn) is a simplified spelling of Latin chamaeleōn, a borrowing of the Greek χαμαιλέων (khamailéōn), a compound of χαμαί (khamaí) "on the ground" and λέων (léōn) "lion". Classification In 1986, the family Chamaeleonidae was divided into two subfamilies, Brookesiinae and Chamaeleoninae. Under this classification, Brookesiinae included the genera Brookesia and Rhampholeon, as well as the genera later split off from them (Palleon and Rieppeleon), while Chamaeleoninae included the genera Bradypodion, Calumma, Chamaeleo, Furcifer and Trioceros, as well as the genera later split off from them (Archaius, Nadzikambia and Kinyongia). Since that time, however, the validity of this subfamily designation has been the subject of much debate, although most phylogenetic studies support the notion that the pygmy chameleons of the subfamily Brookesiinae are not a monophyletic group. While some authorities have previously preferred to use this subfamilial classification on the basis of the absence of evidence principle, these authorities later abandoned this subfamilial division, no longer recognizing any subfamilies with the family Chamaeleonidae. In 2015, however, Glaw reworked the subfamilial division by placing only the genera Brookesia and Palleon within the Brookesiinae subfamily, with all other genera being placed in Chamaeleoninae. Change of colour Some chameleon species are able to change their skin coloration. 
Different chameleon species are able to vary their colouration and pattern through combinations of pink, blue, red, orange, green, black, brown, light blue, yellow, turquoise, and purple. Chameleon skin has a superficial layer which contains pigments, and under the layer are cells with very small (nanoscale) guanine crystals. Chameleons change colour by "actively tuning the photonic response of a lattice of small guanine nanocrystals in the s-iridophores". This tuning, by an unknown molecular mechanism, changes the wavelength of light reflected off the crystals, which changes the colour of the skin. The colour change was duplicated ex vivo by modifying the osmolarity of pieces of white skin. Colour change in chameleons has functions in camouflage, but most commonly in social signalling and reactions to temperature and other conditions. The relative importance of these functions varies with the circumstances, as well as the species. Colour change signals a chameleon's physiological condition and intentions to other chameleons. Because chameleons are ectothermic, another reason why they change colour is to regulate their body temperatures, either to a darker colour to absorb light and heat to raise their temperature, or to a lighter colour to reflect light and heat, thereby either stabilizing or lowering their body temperature. Chameleons tend to show brighter colours when displaying aggression to other chameleons, and darker colours when they submit or "give up". Most chameleon genera (exceptions are Chamaeleo, Rhampholeon and Rieppeleon) have blue fluorescence in a species-specific pattern in their skull tubercles, and in Brookesia there is also some in tubercles on the body. The fluorescence is derived from bones that only are covered in very thin skin and possibly serves a signaling role, especially in shaded habitats. Some species, such as Smith's dwarf chameleon and several others in the genus Bradypodion, adjust their colours for camouflage depending on the vision of the specific predator species (for example, bird or snake) by which they are being threatened. In the introduced Hawaiian population of Jackson's chameleon, conspicuous colour changes that are used for communication between chameleons have increased, whereas anti-predator camouflage colour changes have decreased relative to the native source population in Kenya, where there are more predators. Chameleons have two superimposed layers within their skin that control their colour and thermoregulation. The top layer contains a lattice of guanine nanocrystals, and by exciting this lattice the spacing between the nanocrystals can be manipulated, which in turn affects which wavelengths of light are reflected and which are absorbed. Exciting the lattice increases the distance between the nanocrystals, and the skin reflects longer wavelengths of light. Thus, in a relaxed state the crystals reflect blue and green, but in an excited state the longer wavelengths such as yellow, orange, green, and red are reflected. The skin of a chameleon also contains some yellow pigments, which combined with the blue reflected by a relaxed crystal lattice results in the characteristic green colour, which is common for many chameleons in their relaxed state. Chameleon colour palettes have evolved through evolution and the environment. Chameleons living in the forest have a more defined and colourful palette compared to those living in the desert or savanna, which have more of a basic, brown, and charred palette. 
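The qualitative link described above, wider nanocrystal spacing reflecting longer wavelengths, can be illustrated with a first-order Bragg-style estimate, λ ≈ 2·n·d for normal incidence. The real skin is a three-dimensional photonic lattice analysed with full optical models, so the sketch below is deliberately crude, and both the spacings and the effective refractive index are assumed values rather than measurements from the source.

```python
def reflected_wavelength_nm(spacing_nm, effective_index=1.55):
    """First-order Bragg-like estimate at normal incidence: lambda ~ 2 * n * d.
    A deliberately crude stand-in for the full photonic-crystal response."""
    return 2.0 * effective_index * spacing_nm

# Assumed lattice spacings, chosen only to show the direction of the effect:
print(round(reflected_wavelength_nm(150)))   # ~465 nm, blue/green in the relaxed state
print(round(reflected_wavelength_nm(200)))   # ~620 nm, orange/red once the lattice is stretched
```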
Evolution The oldest described chameleon is Anqingosaurus brevicephalus from the Middle Paleocene (about 58.7–61.7 mya) of China. Other chameleon fossils include Chamaeleo caroliquarti from the Lower Miocene (about 13–23 mya) of the Czech Republic and Germany, and Chamaeleo intermedius from the Upper Miocene (about 5–13 mya) of Kenya. The chameleons are probably far older than that, perhaps sharing a common ancestor with iguanids and agamids more than 100 mya (agamids being more closely related). Since fossils have been found in Africa, Europe, and Asia, chameleons were certainly once more widespread than they are today. Although nearly half of all chameleon species today live in Madagascar, this offers no basis for speculation that chameleons might originate from there. In fact, it has recently been shown that chameleons most likely originated in mainland Africa. It appears there were two distinct oceanic migrations from the mainland to Madagascar. The diverse speciation of chameleons has been theorized to have directly reflected the increase in open habitats (savannah, grassland, and heathland) that accompanied the Oligocene period. Monophyly of the family is supported by several studies. Daza et al. (2016) described a small (10.6 mm in snout-vent length), probably neonatal lizard preserved in the Cretaceous (Albian-Cenomanian boundary) amber from Myanmar. The authors noted that the lizard has "short and wide skull, large orbits, elongated and robust lingual process, frontal with parallel margins, incipient prefrontal boss, reduced vomers, absent retroarticular process, low presacral vertebral count (between 15 and 17) and extremely short, curled tail"; the authors considered these traits to be indicative of the lizard's affiliation with Chamaeleonidae. The phylogenetic analysis conducted by the authors indicated that the lizard was a stem-chamaeleonid. However, Matsumoto & Evans (2018) reinterpreted this specimen as an albanerpetontid amphibian. This specimen was given the name Yaksha perettii in 2020, and was noted to have several convergently chameleon-like features, including adaptations for ballistic feeding. While the exact evolutionary history of colour change in chameleons is still unknown, there is one aspect of the evolutionary history of chameleon colour change that has already been conclusively studied: the effects of signal efficacy. Signal efficacy, or how well the signal can be seen against its background, has been shown to correlate directly to the spectral qualities of chameleon displays. Dwarf chameleons, the chameleon of study, occupy a wide variety of habitats from forests to grasslands to shrubbery. It was demonstrated that chameleons in brighter areas tended to present brighter signals, but chameleons in darker areas tended to present relatively more contrasting signals to their backgrounds. This finding suggests that signal efficacy (and thus habitat) has affected the evolution of chameleon signaling. Stuart-Fox et al. note that it makes sense that selection for crypsis is not seen to be as important as selection for signal efficacy, because the signals are only shown briefly; chameleons are almost always muted cryptic colours. Description Chameleons vary greatly in size and body structure, with maximum total lengths varying from 22 mm (0.87 in) in male Brookesia nana (one of the world's smallest reptiles) to 68.5 cm (27.0 in) in the male Furcifer oustaleti. 
Many have head or facial ornamentation, such as nasal protrusions, or horn-like projections in the case of Trioceros jacksonii, or large crests on top of their heads, like Chamaeleo calyptratus. Many species are sexually dimorphic, and males are typically much more ornamented than the female chameleons. Typical sizes of species of chameleon commonly kept in captivity or as pets are: The feet of chameleons are highly adapted to arboreal locomotion, and species such as Chamaeleo namaquensis that have secondarily adopted a terrestrial habit have retained the same foot morphology with little modification. On each foot, the five distinguished toes are grouped into two fascicles. The toes in each fascicle are bound into a flattened group of either two or three, giving each foot a tongs-like appearance. On the front feet, the outer, lateral, group contains two toes, whereas the inner, medial, group contains three. On the rear feet, this arrangement is reversed, the medial group containing two toes, and the lateral group three. These specialized feet allow chameleons to grip tightly onto narrow or rough branches. Furthermore, each toe is equipped with a sharp claw to afford a grip on surfaces such as bark when climbing. It is common to refer to the feet of chameleons as didactyl or zygodactyl, though neither term is fully satisfactory, both being used in describing different feet, such as the zygodactyl feet of parrots or didactyl feet of sloths or ostriches, none of which is significantly like chameleon feet. Although "zygodactyl" is reasonably descriptive of chameleon foot anatomy, their foot structure does not resemble that of parrots, to which the term was first applied. As for didactyly, chameleons visibly have five toes on each foot, not two. Some chameleons have a crest of small spikes extending along the spine from the proximal part of the tail to the neck; both the extent and size of the spikes vary between species and individuals. These spikes help break up the definitive outline of the chameleon, which aids it when trying to blend into a background. Chameleon upper and lower eyelids are joined, with only a pinhole large enough for the pupil to see through. Each eye can pivot and focus independently, allowing the chameleon to observe two different objects simultaneously. This gives them a full 360-degree arc of vision around their bodies. Prey is located using monocular depth perception, not stereopsis. Chameleons have the highest magnification (per size) of any vertebrate, with the highest density of cones in the retina. Like snakes, chameleons do not have an outer or a middle ear, so there is neither an ear-opening nor an eardrum. However, chameleons are not deaf: they can detect sound frequencies in the range of 200–600 Hz. Chameleons can see in both visible and ultraviolet light. Chameleons exposed to ultraviolet light show increased social behavior and activity levels, are more inclined to bask, feed, and reproduce as it has a positive effect on the pineal gland. All chameleons are primarily insectivores that feed by ballistically projecting their long tongues from their mouths to capture prey located some distance away. While the chameleons' tongues are typically thought to be one and a half to two times the length of their bodies (their length excluding the tail), smaller chameleons (both smaller species and smaller individuals of the same species) have recently been found to have proportionately larger tongue apparatuses than their larger counterparts. 
Thus, smaller chameleons are able to project their tongues greater distances than the larger chameleons that are the subject of most studies and tongue length estimates, and can project their tongues more than twice their body length. The tongue apparatus consists of highly modified hyoid bones, tongue muscles, and collagenous elements. The hyoid bone has an elongated, parallel-sided projection, called the entoglossal process, over which a tubular muscle, the accelerator muscle, sits. The accelerator muscle contracts around the entoglossal process and is responsible for creating the work to power tongue projection, both directly and through the loading of collagenous elements located between the entoglossal process and the accelerator muscle. The tongue retractor muscle, the hyoglossus, connects the hyoid and accelerator muscle, and is responsible for drawing the tongue back into the mouth following tongue projection. Tongue projection occurs at extremely high performance, reaching the prey in as little as 0.07 seconds, having been launched at accelerations exceeding 41 g. The power with which the tongue is launched, known to exceed 3000 W kg−1, exceeds that which muscle is able to produce, indicating the presence of an elastic power amplifier to power tongue projection. The recoil of elastic elements in the tongue apparatus is thus responsible for large percentages of the overall tongue projection performance. One consequence of the incorporation of an elastic recoil mechanism to the tongue projection mechanism is relative thermal insensitivity of tongue projection relative to tongue retraction, which is powered by muscle contraction alone, and is heavily thermally sensitive. While other ectothermic animals become sluggish as their body temperatures decline, due to a reduction in the contractile velocity of their muscles, chameleons are able to project their tongues at high performance even at low body temperatures. The thermal sensitivity of tongue retraction in chameleons, however, is not a problem, as chameleons have a very effective mechanism of holding onto their prey once the tongue has come into contact with it, including surface phenomena, such as wet adhesion and interlocking, and suction. The thermal insensitivity of tongue projection thus enables chameleons to feed effectively on cold mornings prior to being able to behaviorally elevate their body temperatures through thermoregulation, when other sympatric lizards species are still inactive, likely temporarily expanding their thermal niche as a result. Certain species of chameleons have bones that glow when under ultraviolet light, also known as biogenic fluorescence. Some 31 different species of Calumma chameleons, all native to Madagascar, displayed this fluorescence in CT scans. The bones emitted a bright blue glow and could even shine through the chameleon's four layers of skin. The face was found to have a different glow, appearing as dots otherwise known as tubercles on facial bones. The glow results from proteins, pigments, chitin, and other materials that make up a chameleon's skeleton, possibly giving chameleons a secondary signaling system that does not interfere with their colour-changing ability, and may have evolved from sexual selection. 
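The projection figures quoted above (accelerations beyond 41 g, mass-specific power above 3000 W/kg) fit together under simple constant-acceleration kinematics. The sketch below assumes an illustrative 20 cm strike distance, which is not a figure from the text.

```python
import math

G = 9.81

def projection_stats(accel_g=41.0, strike_distance_m=0.20):
    """Back-of-the-envelope tongue-projection kinematics, assuming constant
    acceleration over an assumed strike distance (the 0.20 m is illustrative)."""
    a = accel_g * G
    t = math.sqrt(2.0 * strike_distance_m / a)   # time to cover the distance
    v = a * t                                    # speed at the end of acceleration
    peak_power_per_kg = a * v                    # instantaneous mass-specific power
    return t, v, peak_power_per_kg

t, v, p = projection_stats()
print(round(t * 1000), "ms,", round(v, 1), "m/s,", round(p), "W/kg")
# ~32 ms, ~12.7 m/s and ~5100 W/kg, comfortably above what muscle alone can deliver,
# consistent with the elastic power amplifier described above.
```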
Distribution and habitat Chameleons primarily live in the mainland of sub-Saharan Africa and on the island of Madagascar, although a few species live in northern Africa, southern Europe (Portugal, Spain, Italy, Greece, Cyprus and Malta), the Middle East, southeast Pakistan, India, Sri Lanka, and several smaller islands in the western Indian Ocean. Introduced, non-native populations are found in Hawaii and Florida. Chameleons are found only in tropical and subtropical regions and inhabit all kinds of lowland and mountain forests, woodlands, shrublands, savannas, and sometimes deserts, but each species tends to be a restricted to only one of a few different habitat types. The typical chameleons from the subfamily Chamaeleoninae are arboreal, usually living in trees or bushes, although a few (notably the Namaqua chameleon) are partially or largely terrestrial. The genus Brookesia, which comprises the majority of the species in the subfamily Brookesiinae, live low in vegetation or on the ground among leaf litter. Many chameleon species have small distributions and are considered threatened. Declining chameleon numbers are mostly due to habitat loss. Reproduction Most chameleons are oviparous, but all Bradypodion species and many Trioceros species are ovoviviparous (although some biologists prefer to avoid the term ovoviviparous because of inconsistencies with its use in some animal groups, instead just using viviparous). The oviparous species lay eggs three to six weeks after copulation. The female will dig a hole—from 10–30 cm (4–12 in), deep depending on the species—and deposit her eggs. Clutch sizes vary greatly with species. Small Brookesia species may only lay two to four eggs, while large veiled chameleons (Chamaeleo calyptratus) have been known to lay clutches of 20–200 (veiled chameleons) and 10–40 (panther chameleons) eggs. Clutch sizes can also vary greatly among the same species. Eggs generally hatch after four to 12 months, again depending on the species. The eggs of Parson's chameleon (Calumma parsoni) typically take 400 to 660 days to hatch. Chameleons lay flexible-shelled eggs which are affected by environmental characteristics during incubation. The egg mass is the most important in differentiating survivors of Chameleon during incubation. An increase in egg mass will depend on temperature and water potential. To understand the dynamics of water potential in Chameleon eggs, the consideration of exerted pressure on eggshells will be essential because the pressure of eggshells play an important role in the water relation of eggs during entire incubation period. The ovoviviparous species, such as the Jackson's chameleon (Trioceros jacksonii) have a five- to seven-month gestation period. Each young chameleon is born within the sticky transparent membrane of its yolk sac. The mother presses each egg onto a branch, where it sticks. The membrane bursts and the newly hatched chameleon frees itself and climbs away to hunt for itself and hide from predators. The female can have up to 30 live young from one gestation. Diet Chameleons generally eat insects, but larger species, such as the common chameleon, may also take other lizards and young birds.: 5 The range of diets can be seen from the following examples: Anti-predator adaptations Chameleons are preyed upon by a variety of other animals. Birds and snakes are the most important predators of adult chameleons. Invertebrates, especially ants, put a high predation pressure on chameleon eggs and juveniles. 
Chameleons are unlikely to be able to flee from predators and rely on crypsis as their primary defense. Chameleons can change both their colours and their patterns (to varying extents) to resemble their surroundings or disrupt the body outline and remain hidden from a potential enemy's sight. Only when detected do chameleons actively defend themselves. They adopt a defensive body posture, present an attacker with a laterally flattened body to appear larger, warn with an open mouth, and, if needed, utilize feet and jaws to fight back. Vocalization is sometimes incorporated into threat displays. Parasites Chameleons are parasitized by nematode worms, including threadworms (Filarioidea). Threadworms can be transmitted by biting insects such as ticks and mosquitoes. Other roundworms are transmitted through food contaminated with roundworm eggs; the larvae burrow through the wall of the intestine into the bloodstream. Chameleons are subject to several protozoan parasites, such as Plasmodium, which causes malaria, Trypanosoma, which causes sleeping sickness, and Leishmania, which causes leishmaniasis. Chameleons are subject to parasitism by coccidia, including species of the genera Choleoeimeria, Eimeria, and Isospora. As pets Chameleons are popular reptile pets, mostly imported from African countries like Madagascar, Tanzania, and Togo. The most common in the trade are the Senegal chameleon (Chamaeleo senegalensis), the Yemen or veiled chameleon (Chamaeleo calyptratus), the panther chameleon (Furcifer pardalis), and Jackson's chameleon (Trioceros jacksonii). Other chameleons seen in captivity (albeit on an irregular basis) include such species as the carpet chameleon (Furcifer lateralis), Meller's chameleon (Trioceros melleri), Parson's chameleon (Calumma parsonii), and several species of pygmy and leaf-tailed chameleons, mostly of the genera Brookesia, Rhampholeon, or Rieppeleon. These are among the most sensitive reptiles one can own, requiring specialized attention and care. The U.S. has been the main importer of chameleons since the early 1980s accounting for 69% of African reptile exports. However, there have been large declines due to tougher regulations to protect species from being taken from the wild and due to many becoming invasive in places like Florida. They have remained popular though which may be due to the captive-breeding in the U.S. which has increased to the point that the U.S. can fulfill its demand, and has now even become a major exporter as well. In the U.S. they are so popular, that despite Florida having six invasive chameleon species due to the pet trade, reptile hobbyists in these areas search for chameleons to keep as pets or to breed and sell them, with some selling for up to a thousand dollars. Historical understandings Aristotle (4th century BC) describes chameleons in his History of Animals. Pliny the Elder (1st century AD) also discusses chameleons in his Natural History, noting their ability to change colour for camouflage. The chameleon was featured in Conrad Gessner's Historia animalium (1563), copied from De aquatilibus (1553) by Pierre Belon. In Shakespeare's Hamlet, the eponymous Prince says "Excellent, i' faith, of the chameleon's dish. I eat the air, promise-crammed." This refers to the Elizabethan belief that chameleons lived on nothing but the air. References General bibliography Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Square_Enix] | [TOKENS: 7422]
Contents Square Enix Square Enix Holdings Co., Ltd. is a Japanese multinational holding company, video game publisher and entertainment conglomerate. It releases role-playing game franchises, such as Final Fantasy, Dragon Quest, and Kingdom Hearts, among numerous others. Outside of video game publishing and development, it is also in the business of merchandise, arcade facilities, and manga publication under its Gangan Comics brand. The original Square Enix Co., Ltd. was formed in April 2003 from a merger between Square and Enix, with the latter as the surviving company. Each share of Square's common stock was exchanged for 0.85 shares of Enix's common stock. At the time, 80% of Square Enix staff were made up of former Square employees. As part of the merger, former Square president Yoichi Wada was appointed the president of the new corporation, while former Enix president Keiji Honda was named vice president. Yasuhiro Fukushima, the largest shareholder of the combined corporation and founder of Enix, became chairman. In October 2008, Square Enix conducted a company split between its corporate business and video game operations, reorganizing itself as the holding company Square Enix Holdings Co., Ltd., while its domestic video game operations were placed under the newly formed subsidiary Square Enix Co., Ltd. The group operates American, Chinese and European branches, with offices in Los Angeles, Beijing, Paris, Hamburg, and London. Several of Square Enix's franchises have sold over 10 million copies worldwide as of the early 2020s, with Final Fantasy selling 173 million, Dragon Quest selling 85 million, and Kingdom Hearts shipping 36 million. In 2005, Square Enix acquired arcade corporation Taito. In 2009, Square Enix acquired Eidos plc, the parent company of British game publisher Eidos Interactive, which was then absorbed into its European branch. Square Enix is headquartered at the Shinjuku Eastside Square Building in Shinjuku, Tokyo, along with a second office in Osaka. It has over 5,000 employees worldwide through its base operations and subsidiaries. Corporate history Enix was founded on September 22, 1975, as Eidansha Boshu Service Center by Japanese architect-turned-entrepreneur Yasuhiro Fukushima. Enix focused on publishing games, often by developers that partnered exclusively with the company. In the 1980s, in a partnership with developer Chunsoft, the company began publishing the Dragon Quest series of console games. Key members of the developer's staff included director Koichi Nakamura, writer Yuji Horii, artist Akira Toriyama, and composer Koichi Sugiyama. The first game in the Famicom-based RPG series, Dragon Warrior, was released in 1986 and would eventually sell 1.5 million copies in Japan, establishing Dragon Quest as the company's most profitable franchise. Despite the announcement that Enix's long-time competitor Square would develop exclusively for the PlayStation, Enix announced in January 1997 that it would release games for both Nintendo and Sony consoles. This caused a significant rise in stock for both Enix and Sony. By November 1999, Enix was listed in the Tokyo Stock Exchange's first section, indicating it as a "large company". Square was started in October 1983 by Masafumi Miyamoto as a computer game software division of Den-Yu-Sha, a power line construction company owned by his father.
While at the time, game development was usually conducted by only one programmer, Miyamoto believed that it would be more efficient to have graphic designers, programmers and professional story writers working together. In September 1986, the division was spun off into an independent company led by Miyamoto, officially named Square Co., Ltd. After releasing several unsuccessful games for the Famicom, Square relocated to Ueno, Tokyo in 1987 and developed Final Fantasy, a role-playing video game inspired by Enix's success in the genre with the 1986 Dragon Quest. Final Fantasy was a success with over 400,000 copies sold, and it became Square's leading franchise, spawning dozens of games in a series that continues to the present. Buoyed by the success of their Final Fantasy franchise, Square developed notable games and franchises such as Chrono, Mana, Kingdom Hearts (in collaboration with The Walt Disney Company), and Super Mario RPG (under the guidance of Super Mario creator Shigeru Miyamoto). By late 1994 they had developed a reputation as a producer of high-quality role-playing video games. Square was one of the many companies that had planned to develop and publish their games for the Nintendo 64, but with the cheaper costs associated with developing games on CD-based consoles such as the Sega Saturn and the Sony PlayStation, Square decided to develop titles for the latter system. Final Fantasy VII was one of these games, and it sold 9.8 million copies, making it the second-best-selling game for the PlayStation. A merger between Square and Enix was considered since at least 2000; the financial failure in 2001 of Square's first movie, Final Fantasy: The Spirits Within, made Enix reluctant to proceed while Square was losing money. With the company facing its second year of financial losses, Square approached Sony for a capital injection, and on October 8, 2001, Sony purchased an 18.6% stake in Square. Following the success of both Final Fantasy X and Kingdom Hearts, the company's finances stabilized, and it recorded the highest operating margin in its history in the fiscal year 2002. It was announced on November 25, 2002, that Square and Enix's previous plans to merge were to officially proceed, intending to decrease development costs and to compete with foreign developers. As described by Square's president and CEO Yoichi Wada: "Square has also fully recovered, meaning this merger is occurring at a time when both companies are at their height." Some shareholders expressed concerns about the merger, notably Miyamoto (the founder and largest shareholder of Square), who would find himself holding a significantly smaller percentage of the combined companies. Other criticism came from Takashi Oya of Deutsche Securities, who expressed doubts about the benefits of such a merger: "Enix outsources game development and has few in-house creators, while Square does everything by itself. The combination of the two provides no negative factors but would bring little in the way of operational synergies." Miyamoto's concerns were eventually resolved by altering the exchange ratio of the merger so that each Square share would be exchanged for 0.85 Enix shares rather than 0.81 shares, and the merger was greenlit. The merger was set for April 1, 2003, on which date the newly merged entity Square Enix came into being. At the time of the merger, 80% of Square Enix staff were made up of former Square employees. 
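The significance of moving the exchange ratio from 0.81 to 0.85 Enix shares per Square share is easiest to see with a toy calculation of a Square shareholder's post-merger stake. All of the share counts below are hypothetical, chosen only to illustrate the mechanics, not actual figures from either company.

```python
def post_merger_stake(holder_square_shares, square_total, enix_total, ratio):
    """Fraction of the merged company held by a Square shareholder, given the
    exchange ratio of new Enix shares issued per Square share."""
    new_shares = holder_square_shares * ratio
    total_after = enix_total + square_total * ratio
    return new_shares / total_after

# Purely hypothetical share counts, just to show why the ratio mattered to large holders:
SQUARE_TOTAL, ENIX_TOTAL, HOLDER = 40_000_000, 55_000_000, 12_000_000
for ratio in (0.81, 0.85):
    print(ratio, f"{post_merger_stake(HOLDER, SQUARE_TOTAL, ENIX_TOTAL, ratio):.2%}")
# The higher ratio leaves the Square shareholder with a slightly larger slice of the merged firm.
```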
As part of the merger, former Square president Yoichi Wada was appointed the president of the new corporation, while former Enix president Keiji Honda became its vice president. The founder of Enix and the largest shareholder of the newly combined corporation, Yasuhiro Fukushima, was made its honorary chairman. As a result of the merger, Enix was the surviving company and Square Co., Ltd. was dissolved. In July of that year, the Square Enix headquarters were moved to Yoyogi, Shibuya, Tokyo, to help combine the two companies. To strengthen its wireless market, Square Enix acquired mobile application developer UIEvolution in March 2004, which was sold in December 2007, and the company instead founded its own Square Enix MobileStudio in January 2008 to focus on mobile products. In January 2005, Square Enix founded Square Enix China, expanding their interests in the People's Republic of China. In September 2005, Square Enix bought the gaming developer and publisher Taito, renowned for their arcade hits such as Space Invaders and the Bubble Bobble series; Taito's home and portable console games divisions were merged into Square Enix itself by March 2010. In August 2008, Square Enix made plans for a similar expansion by way of a friendly takeover of video game developer Tecmo by purchasing shares at a 30 percent premium, but Tecmo rejected the proposed takeover. Tecmo would later merge with Koei in April 2009 to form Koei Tecmo. In April 2007, Square Enix Ltd. CEO John Yamamoto also became CEO of Square Enix, Inc. In 2008–2009, Square Enix was reportedly working with Grin on a Final Fantasy spin-off codenamed Fortress. The project was allegedly canceled by Square Enix after introducing seemingly impossible milestones and without payments made, resulting in Grin declaring bankruptcy and its co-founders blaming Square Enix for being "betrayed". On January 8, 2009, Square Enix signed an agreement with Ubisoft where the former would work to assist the latter in distributing their video games in Japan. In February 2009, Square Enix announced a takeover deal for Eidos (formerly SCi Entertainment), the holding company for Eidos Interactive. The UK-based publisher's assets include Tomb Raider, Hitman, Deus Ex, Thief, and Legacy of Kain franchises, along with subsidiary development studios Crystal Dynamics, Eidos-Montréal and IO Interactive that developed the games. The acquisition of Eidos was completed in April 2009, and in November it was merged with Square Enix's European publishing organization, business unit Square Enix Europe. Eidos' US operations were merged with Square Enix Incorporated. In April 2010, a new Japanese label for Western games bearing CERO restrictions called Square Enix Extreme Edges was announced. In July 2010, Mike Fischer was appointed CEO of Square Enix, Inc. Square Enix founded the mobile development studio Hippos Lab in March 2011 and Square Enix Montréal in 2012. In May 2011, VG24/7 reported Stainless Games had purchased the rights to Carmageddon from Square Enix. In July 2011, it was reported that Square Enix closed their Los Angeles Studio. In January 2012, Square Enix North American office could pursue smaller niche, mobile and social media games due to its existing revenue streams. In October 2012, Square Enix was perceived as a "force in mobile" games by Kotaku. The price of Final Fantasy Dimensions and Demons' Score, $30 and $44 respectively, was criticized. 
On March 26, 2013, citing sluggish sales of major Western games, Square Enix announced a major restructuring, an expected loss of ¥10 billion, and the resignation of president Yoichi Wada, whom Yosuke Matsuda replaced. Phil Rogers, among others, was elected as a new director. With the restructuring, Square Enix of America CEO Mike Fischer left the company in May, with former Square Enix Europe CEO Phil Rogers becoming CEO of Americas and Europe. Further executive changes at Square Enix's Western studios were mentioned in a statement. Square Enix Europe was hit with layoffs, and Life President Ian Livingstone departed from the company in September 2013. In the fiscal year report for March 2013, the company said that sales of Tomb Raider (2013) and Hitman: Absolution were weak despite critical acclaim. The North American sales force was said to be ineffective and price pressure was intense. Matsuda noted the long development time of the company's important games and said it needed to shift to a business model with frequent customer interactions, citing Kickstarter as an example. In March 2013, Square Enix India opened in Mumbai; however, the office was closed in April 2014 and reopened five years later. Square Enix Latin America, in Mexico, was also closed, in 2015. A mobile studio called Smileworks was founded in Indonesia in June 2013; however, it was closed in January 2015. In 2014, Square Enix Collective, an indie developer service provider headed by Phil Elliot, was launched. Also in 2014, Square Enix signed a strategic alliance and cooperation agreement with the Japanese and French video game companies Bandai Namco Entertainment and Ubisoft; it has served as the Japanese publisher of video games and crossover productions since 2009.[citation needed] In March 2014, following the success of Bravely Default, Square Enix said it would "go back to their roots" and focus on creating content that would appeal to its core audience. Karl Stewart, vice president of strategic marketing at Square Enix for North America and Europe, left the company that month. In 2015, Square Enix created a new studio known as Tokyo RPG Factory to develop what was then dubbed Project Setsuna. Around 2015, Square Enix's Western divisions began "officially working across LA and London". In January 2017, Norwegian studio Artplant purchased the former Eidos franchise Project I.G.I. On February 21, 2017, the formation of a new studio, Studio Istolia, was announced. The studio, headed by Hideo Baba, would be working on the new RPG Project Prelude Rune. In November 2017, IO Interactive conducted a management buyout from Square Enix and the Hitman IP was transferred to the studio. In September 2018, COO Mike Sherlock died, with Square Enix's executive team assuming his immediate responsibilities. In 2018, Square Enix branded its third-party publishing division Square Enix External Studios, headed by Jon Brooke and Lee Singleton. John Heinecke was appointed as CMO for Americas and Europe in October 2018. Baba departed the studio in early 2019; shortly afterwards, Studio Istolia was closed and Project Prelude Rune was cancelled following an assessment of the project, with its staff reassigned to other projects within the company. In 2019, Square Enix opened an Indian office again, now in Bangalore, which expanded into publishing mobile games for the Indian market in 2021.
In June 2020, Square Enix donated $2.4 million to charities around its Western studios and offices for the Black Lives Matter cause and COVID-19 relief; the funds were partially raised from sales of its discounted Square Enix Eidos Anthology bundle. In March 2021, Forever Entertainment, a Polish studio, was reported to be working to bring several of Square Enix's properties to modern systems. A new mobile studio called Square Enix London Mobile, working on Tomb Raider Reloaded and an unannounced title based on Avatar: The Last Airbender with Navigator Games, was announced on October 20, 2021. In March 2022, Square Enix announced that it would donate $500,000 to the United Nations fund for Ukrainian refugees during the Russian invasion of Ukraine. On May 1, 2022, Square Enix announced that it would sell several assets of subsidiary Square Enix Limited to Swedish games holding company Embracer Group for $300 million. This included the studios Crystal Dynamics, Eidos-Montréal, and Square Enix Montreal, the Deus Ex, Legacy of Kain, Thief, and Tomb Raider IPs, and rights to "over 50 games". Square Enix stated that the sale would further help it invest in blockchain and other technologies, and would "assist the company in adapting to the changes underway in the global business environment by establishing a more efficient allocation of resources". Square Enix also stated that it would retain the Life Is Strange, Outriders, and Just Cause franchises. However, during the Japanese publisher's full-year financial results briefing on May 13, president Yosuke Matsuda clarified the past statement and said the money from the sale would be used to strengthen the company's core games business. On July 25, 2022, Square Enix launched the English version of Manga Up!. The sale was closed by August 26, 2022, with the assets held under CDE Entertainment, which is headed from London by Phil Rogers, former CEO of Square Enix Americas and Europe. In the company's financial statement for the following quarter, released in September 2022, Matsuda said the company was moving away from outright owning studios due to the rising costs of development, but was looking at other means of investing in studios, such as joint ventures or investment opportunities. In 2022, Square Enix invested in seven strategic business cooperations in blockchain and cloud services, such as Zebedee (United States), Blocklords (Estonia), Cross The Ages (France), Blacknut (France), Animoca Brands-owned The Sandbox (Australia and Hong Kong), and Ubitus (Japan). On February 28, 2023, Square Enix Holdings announced that on May 1, Luminous Productions would be reorganized and merged into Square Enix, stating that merging the two would "enhance the group's abilities to develop HD games"; the move came around the 20th anniversary of the Square Enix merger. On March 3, Square Enix issued a statement announcing a proposed change to the position of its president and representative director that, if implemented, would result in Yosuke Matsuda stepping down and being succeeded by Takashi Kiryu, then one of the company's directors. The change, announced ahead of the 20th anniversary of the merger, would become effective upon approval at the company's 43rd annual shareholders' meeting, planned for June 2023, and the board meeting that would follow. Kiryu succeeded Matsuda on May 18 and appeared at the Final Fantasy XVI launch event in one of his first public appearances.
In March 2024, Square Enix announced it would be more selective with the games it develops, resulting in numerous unannounced titles being cancelled. The company lost ¥22.1 billion (approximately $140 million) due to "content abandonment"; it is now making the third installment of the Final Fantasy VII Remake series its full focus after the release of Rebirth.
Corporate structure
On October 1, 2008, Square Enix transformed into a holding company and was renamed Square Enix Holdings. At the same time, the development and publishing businesses were transferred to a spin-off company named Square Enix, sharing the same corporate leadership and offices with the holding company. The primary offices for Square Enix and Square Enix Holdings are in the Shinjuku Eastside Square Building in Shinjuku, Tokyo. The company is currently divided into units focusing on different businesses: five Creative Business Units for game development and production in Square Enix Co., Ltd.; a dedicated publishing business unit for manga and book publishing; a digital storefront business division for its e-Store and merchandise production; a media and arts business unit for music production, concert and live performance coordination, and visual content production (live action, animation, and CG for TV, movies, and games); and a blockchain business division. After the merger in 2003, Square Enix's development department was organized into eight Square and two Enix Product Development Divisions (開発事業部, kaihatsu jigyōbu), each focused on different groupings of games. The divisions were spread around different offices; for example, Product Development Division 5 had offices both in Osaka and Tokyo. According to Yoichi Wada, the development department was reorganized away from the Product Development Division system by March 2007 into a project-based system. Until 2013, the teams in charge of the Final Fantasy and Kingdom Hearts series were still collectively referred to as the 1st Production Department (第1制作部, dai-ichi seisakubu). The 1st Production Department was formed from the fall 2010 combination of Square Enix's Tokyo and Osaka development studios, with Shinji Hashimoto as its corporate executive. In December 2013, Square Enix's development was restructured into 12 Business Divisions. In 2017, Business Division 9 was merged into Business Division 8, Business Divisions 11 and 12 merged to become the new Business Division 9, and a new Business Division 11 was created with some staff from Business Division 6. In 2019, Square Enix announced that its eleven Business Divisions would be consolidated into four units by 2020 under a new title, Creative Business Unit. Naoki Yoshida, who was previously the head of Business Division 5, became the head of Creative Business Unit III. Under the current Creative Business Unit structure for development and production, most of the development is done outside of Square Enix by contracted development companies, while the Creative Business Units produce and oversee the titles made by those developers.
All of the internal development done by the Creative Business Units is for titles such as mainline Dragon Quest, Final Fantasy and Kingdom Hearts, while mid-size and smaller titles are in most cases outsourced to other companies. An example is "Team Asano", a team of producers from Creative Business Unit II led by Tomoya Asano, which had Artdink and Netchubiyori develop Triangle Strategy and Historia develop the remake of Live A Live; the team mainly oversees, produces and provides concepts, while the external studios do the bulk of the project under its direction. In April 2024, Square Enix restructured its development operations, moving away from the Creative Business Unit structure to a new Creative Studio structure. The business model of post-merger Square Enix is centered on the idea of "polymorphic content", which consists of developing franchises across multiple media rather than being restricted to a single gaming platform. An early example of this strategy is Enix's Fullmetal Alchemist manga series, which has been adapted into two anime television series, five movies (two animated, three live-action), and several novels and video games. Other polymorphic projects include the Compilation of Final Fantasy VII, Code Age, World of Mana, Ivalice Alliance, and Fabula Nova Crystallis Final Fantasy subseries. According to Yoichi Wada, "It's very difficult to hit the jackpot, as it were. Once we've hit it, we have to get all the juice possible out of it". Similar to Sony's Greatest Hits program, Square Enix also re-releases its best-selling games at a reduced price under a label designated "Ultimate Hits". The standard game design model Square Enix employs is to establish the plot, characters, and art of the game first. Battle systems, field maps, and cutscenes are created next. According to Taku Murata, this process became the company's model for development after the success of Square's Final Fantasy VII in 1997. The team size for Final Fantasy XIII peaked at 180 artists, 30 programmers, and 36 game designers, but analysis and restructuring were done to outsource large-scale development in the future.
Business
Square Enix's primary concentration is on video gaming, and it is best known for its role-playing video game franchises. Of its properties, the Final Fantasy franchise, begun in 1987, is the best-selling, with worldwide sales exceeding 173 million units as of March 2022. The Dragon Quest franchise, whose first title was introduced in 1986, is also a best-seller; it is considered one of the most popular game series in Japan and has sold over 85 million units globally. More recently, the Kingdom Hearts series (developed in collaboration with Disney beginning in 2002) has become popular, with 36 million units shipped as of March 2022. Other popular series developed by Square Enix include the SaGa series with nearly 10 million copies sold since 1989, the Mana series with over 6 million sales since 1991, and the Chrono series with over 5 million sold since 1995. In addition to their sales numbers, many Square Enix games have been highly reviewed; 27 Square Enix games were included in Famitsu magazine's 2006 "Top 100 Games Ever", with 7 in the top 10 and Final Fantasy X claiming the number 1 position. The company also won IGN's award for Best Developer of 2006 for the PlayStation 2. Square and Enix initially targeted Nintendo home consoles with their games, but Square Enix currently develops games for a wide variety of systems.
In the seventh generation of video game consoles, Square Enix released new installments from its major series across all three major systems, including Final Fantasy XIII on both the PlayStation 3 and Xbox 360 and Dragon Quest X on the Wii. Square Enix has also developed titles for handheld game consoles, including the Game Boy Advance, Nintendo DS, PlayStation Portable, Nintendo 3DS, and PlayStation Vita. The company has also published games for Microsoft Windows-based personal computers and various models of mobile phones and modern smartphones. Square Enix mobile phone games became available in 2004 on the Vodafone network in some European countries, including Germany, the United Kingdom, Spain, and Italy. Before the PlayStation 3's launch, Michihiro Sasaki, senior vice president of Square Enix, said of the console, "We don't want the PlayStation 3 to be the overwhelming loser, so we want to support them, but we don't want them to be the overwhelming winner either, so we can't support them too much." Square Enix continued to reiterate its devotion to multi-platform publishing in 2007, promising more support for the North American and European gaming markets, where console pluralism is generally more prevalent than in Japan. This interest in multi-platform development was made evident in 2008 when the previously PlayStation 3-exclusive game Final Fantasy XIII was announced for release on the Xbox 360. In 2008, Square Enix released its first game for the iPod, Song Summoner: The Unsung Heroes. That same year, Square Enix created a new brand for younger children's games, known as Pure Dreams. Pure Dreams' first two games, Snoopy DS: Let's Go Meet Snoopy and His Friends! and Pingu's Wonderful Carnival, were released that year. After acquiring Eidos in 2009, Square Enix combined it with its European publishing wing to create Square Enix Europe, which continued to publish Eidos franchises such as Tomb Raider (88 million sales), Deus Ex (4 million), Thief and Legacy of Kain (3.5 million). Square Enix served as the Japanese publisher for Activision Blizzard and Ubisoft from 2009 to 2024 (followed by the Microsoft acquisition and Ubisoft's partnership with Tencent and Sega Sammy Holdings). In May 2022, Square Enix sold several assets of Square Enix Europe for $300 million to Embracer Group, including former Eidos Interactive franchises such as Tomb Raider, Deus Ex, Thief, Legacy of Kain and more than 50 others. Square Enix-owned franchises and games include:
In 2004, Square Enix began to work on a "common 3D format" that would allow the entire company to develop titles without being restricted to a specific platform: this led to the creation of a game engine named Crystal Tools, which is compatible with the PlayStation 3, the Xbox 360, Windows-based PCs, and to some extent the Wii. It was first shown in a tech demo at E3 2005 and was later used for Final Fantasy XIII based on the demo's reception. Crystal Tools was also used for Final Fantasy Versus XIII before its re-branding as Final Fantasy XV and its shift onto next-gen platforms. Refinement of the engine continued through the development of Final Fantasy XIII-2, and it underwent a major overhaul for Lightning Returns: Final Fantasy XIII. Since that release, no new titles have been announced using Crystal Tools, and it is believed that the development of the engine has halted permanently. Luminous Engine was originally intended for eighth-generation consoles and unveiled at E3 2012 through a tech demo titled Agni's Philosophy.
The first major console title to be developed with the Luminous Engine was Final Fantasy XV; the engine's development was done in tandem with the game, and the game's development helped the programming team optimize the engine. In addition, Square Enix uses commercial engines, including Epic Games' Unreal Engine 3 (for The Last Remnant), Unreal Engine 4 (for Dragon Quest XI, Kingdom Hearts III, and Final Fantasy VII Remake) and Unity (for I Am Setsuna, Lost Sphear, and SaGa: Scarlet Grace). Before the merger, Enix published its first online game, Cross Gate, in Japan, mainland China, and Taiwan in 2001, and Square released Final Fantasy XI in Japan in 2002 for the PlayStation 2 and later the personal computer. With the huge success of Final Fantasy XI, the game was ported to the Xbox 360 two years later and was the first MMORPG on the console. All versions of the game used PlayOnline, a cross-platform online gaming platform and internet service developed by Square Enix. The platform was used as the online service for many games Square Enix developed and published throughout the decade. Due to the success of its MMORPG, Square Enix began a new project called Fantasy Earth: The Ring of Dominion. GamePot, a Japanese game portal, received the license to publish Fantasy Earth in Japan, where it was released as "Fantasy Earth ZERO". In 2006, however, Square Enix dropped the Fantasy Earth Zero project and sold it to GamePot. Square Enix released Concerto Gate, the sequel to Cross Gate, in 2007. A next-gen MMORPG codenamed Rapture was developed by the Final Fantasy XI team using the company's Crystal Tools engine. It was unveiled at E3 2009 as Final Fantasy XIV for PlayStation 3 and Microsoft Windows and was released on September 30, 2010. Dragon Quest X was announced in September 2011 as an MMORPG being developed for Nintendo's Wii and Wii U consoles; it was released on August 2, 2012, and March 30, 2013, respectively. Like XIV, it used Crystal Tools. Square Enix also made browser games and Facebook games, like Legend World, Chocobo's Crystal Tower and Knights of the Crystals, and online games for Yahoo! Japan, such as Monster x Dragon, Sengoku Ixa, Bravely Default: Praying Brage, Star Galaxy, and Crystal Conquest. In 2013, Dragon Quest X was brought to iOS and Android in Japan using NTT DoCoMo as the release platform and Ubitus for the streaming technology. In 2014, it was also brought to the 3DS in Japan using Ubitus. On May 8, 2012, Square Enix announced a collaboration with Bigpoint Games to create a free-to-play cloud gaming platform that "throws players into 'limitless game worlds' directly through their web browser". The service was launched under the name CoreOnline in August 2012. Citing "limited commercial take-up", Square Enix cancelled the service on November 29, 2013. In September 2014, a cloud gaming company called Shinra Technologies (previously Project Flare) was created; however, it was closed in January 2016. On October 9, 2014, Square Enix launched another online game service in Japan called Dive In, which allowed players to stream console games to their iOS or Android devices. The service was monetized by the amount of time the players spent playing, with each game offered for free for thirty minutes. The service was cancelled on September 13, 2015. Some Square Enix games are available in Japan on the G-cluster streaming service.
With the merger of Taito's businesses into Square Enix, the company gained possession of Taito's arcade infrastructure and facilities and entered the arcade market in 2005. In 2010, Taito revealed NESiCAxLive, a cloud-based system for storing games and swapping them over the internet instead of acquiring physical copies. This system was added to its many arcade gaming locations. The company continues to cater to the arcade audience in Japan with arcade-only titles, with game producers in 2015 stating that Square Enix has a loyal fan base that values the arcade gaming experience. In November 2019, Square Enix announced a "Ninja Tower Tokyo" theme park developed by its newly established Live Interactive Works division. The company has made three forays into the film industry. The first, Final Fantasy: The Spirits Within (2001), was produced by Square subsidiary Square Pictures before the Enix merger; Square Pictures is now a consolidated subsidiary of Square Enix. Its box-office failure caused Enix to delay the merger, which had already been under consideration before the film's creation, until Square became profitable once again. In 2005, Square Enix released Final Fantasy VII Advent Children, a CGI-animated film based on the PlayStation game Final Fantasy VII, set two years after the events of the game. A Deus Ex film was in pre-production in 2012 and, as of 2014, was undergoing rewrites. In 2016, Square Enix revealed Kingsglaive: Final Fantasy XV, a film set in the world of Final Fantasy XV, and Brotherhood: Final Fantasy XV, a new web series released on YouTube and Crunchyroll. The company has a manga publishing division in Japan (originally from Enix) called Gangan Comics, which publishes content for the Japanese market only. In 2010, however, Square Enix launched a digital manga store for North American audiences via its Members services, which contains several notable series published in Gangan anthologies. Titles published by Gangan Comics include Fullmetal Alchemist, Soul Eater, and many others. Other titles include manga adaptations of various Square Enix games, like Dragon Quest, Kingdom Hearts, and Star Ocean. Some of these titles have also been adapted into anime series. Fullmetal Alchemist is the most successful title of Square Enix's manga branch, with more than 64 million volumes sold worldwide. It is licensed in North America by Viz Media, while its two anime adaptations were licensed there by Funimation (now known as Crunchyroll). Starting in Q4 2019, Square Enix began publishing some of its manga series in English. Square Enix has created merchandise for virtually all of its video game franchises. Starting in 2000, Square Enix's former online gaming portal PlayOnline sold merchandise from game franchises including Parasite Eve, Vagrant Story, Chocobo Racing, Front Mission, Chrono Cross, and Final Fantasy. Mascots from game franchises are a popular focus for merchandise, such as the Chocobo from Final Fantasy, which has appeared as a rubber duck, as a plush baby Chocobo, and on coffee mugs. Square Enix also designed a Chocobo character costume for the release of Chocobo Tales. The Slime character from Dragon Quest has also been frequently used in Square Enix merchandise, especially in Japan. On the Japanese-language Square Enix shopping website, there is also a Slime-focused section called "Smile Slime". Slime merchandise includes plush toys, game controllers, figurines, and several board games, including one titled Dragon Quest Slime Racing.
In Japan, pork-filled steamed buns shaped like Slimes were sold in 2010. For Dragon Quest's 25th anniversary, special items were sold, including business cards, tote bags, and crystal figurines. Rabites from the Mana series have appeared in several pieces of Square Enix merchandise, including plush dolls, cushions, lighters, mousepads, straps, telephone cards, and T-shirts. Square Enix has also made merchandise for third-party series, including figures for Mass Effect and Halo in 2012. Since 2012, it has operated shops called "Square Enix Cafe" in Tokyo, Osaka and Shanghai, which display and sell merchandise as well as serve café food.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nergal] | [TOKENS: 10392]
Nergal
Nergal (Sumerian: 𒀭𒄊𒀕𒃲 dKIŠ.UNU or dGÌR.UNU.GAL; Hebrew: נֵרְגַל, Modern: Nerəgal, Tiberian: Nērəgal; Aramaic: ܢܸܪܓܲܠ; Latin: Nirgal) was a Mesopotamian god worshiped through all periods of Mesopotamian history, from Early Dynastic to Neo-Babylonian times, with a few attestations indicating that his cult survived into the period of Achaemenid domination. He was primarily associated with war, death, and disease, and has been described as the "god of inflicted death". He reigned over Kur, the Mesopotamian underworld, depending on the myth either on behalf of his parents Enlil and Ninlil, or, in later periods, as a result of his marriage to the goddess Ereshkigal. Originally either Mammitum, a goddess possibly connected to frost, or Laṣ, sometimes assumed to be a minor medicine goddess, was regarded as his wife, though other traditions existed, too. His primary cult center was Kutha, located in the north of historical Babylonia. His main temple bore the ceremonial name E-Meslam and he was also known by the name Meslamtaea, "he who comes out of Meslam". Initially he was only worshiped in the north, with a notable exception being Girsu during the reign of Gudea of Lagash, but starting with the Ur III period he became a major deity in the south too. He remained prominent in both Babylonia and Assyria in later periods, and in the Neo-Babylonian state pantheon he was regarded as the third most important god, after Marduk and Nabu. Nergal was associated with a large number of local or foreign deities. The Akkadian god Erra was syncretised with him at an early date, and especially in literary texts they functioned as synonyms of each other. Other major deities frequently compared to or syncretised with him include the western god Resheph, best attested in Ebla and Ugarit, who was also a god of war, plague and death, and the Elamite Simut, who was likely a warrior god and shared Nergal's association with the planet Mars. It has also been proposed that his name was used to represent a Hurrian god, possibly Kumarbi or Aštabi, in early inscriptions from Urkesh, but there is also evidence that he was worshiped by the Hurrians under his own name as one of the Mesopotamian deities they incorporated into their own pantheon. Two well-known myths focus on Nergal: Nergal and Ereshkigal and the Epic of Erra. The former describes the circumstances of his marriage to Ereshkigal, the Mesopotamian goddess of the dead, while the latter describes his rampages and the efforts of his sukkal (attendant deity) Ishum to stop them. He also appears in a number of other, less well-preserved compositions.
Names and epithets
Nergal's name can be translated from Sumerian as "lord of the big city", a euphemistic way to refer to him as a ruler of the underworld. The earliest attested spelling is dKIŠ.UNU, with its standard derivative dKIŠ.UNU.GAL first attested in the Old Akkadian period. Since in the Old Babylonian period the cuneiform signs KIŠ and GÌR coalesced, transliterations using the latter in place of the former can also be found in literature. The variant dNIN.KIŠ.UNU, attested in an inscription of Naram-Sin of Akkad, resulted from the use of a derivative of Nergal's name, KIŠ.UNU, as an early logographic writing of the name of Kutha, his cult center. Phonetic spellings of Nergal's name are attested in cuneiform (dné-ri-ig-lá in Old Assyrian Tell Leilan, dné-ri-ig-la in Nuzi), as well as in Aramaic (nrgl, nyrgl) and Hebrew (nēregal in the Masoretic Text).
Meslamtaea, "he who has come out of Meslam", was originally used as an alternative name of Nergal in the southern part of Lower Mesopotamia up to the Ur III period. It has been proposed that it was euphemistic and reflected the fact that Nergal initially could not be recognized as a ruler of the underworld in the south due to the existence of Ninazu (sometimes assumed to be the earliest Mesopotamian god of death) and Ereshkigal, and perhaps only served as a war deity. With time, Meslamtaea also came to be used as the name of a separate deity. As attested for the first time in a hymn from the reign of Ibbi-Sin, he formed a pair with Lugalirra. Due to the connection between Nergal and these two gods, who could be regarded as a pair of twins, his own name could be represented by the logogram dMAŠ.TAB.BA and its variant dMAŠ.MAŠ, both of them originally meaning "(divine) twins". dMAŠ.MAŠ is attested in Neo-Assyrian theophoric names as a spelling of Nergal's name, though only uncommonly. However, the god designated by this logogram in one of the Amarna letters, written by the king of Alashiya, is most likely Resheph instead. From the Old Babylonian period onward the name Erra, derived from the Semitic root ḥrr, and thus etymologically related to the Akkadian verb erēru, "to scorch", could be applied to Nergal, though it originally referred to a distinct god. The two of them started to be associated in the Old Babylonian period, were equated in the Weidner and An = Anum god lists, and appear to be synonyms of each other in literary texts (including the Epic of Erra and Nergal and Ereshkigal), where both names can occur side by side as designations of the same figure. However, while in other similar cases (Inanna and Ishtar, Enki and Ea) the Akkadian name eventually started to predominate over the Sumerian one, Erra was the less commonly used one, and there are also examples of late bilingual texts using Nergal's name in the Akkadian version and Erra's in the Sumerian translation, indicating it was viewed as antiquated and was not in common use. Theophoric names invoking Erra are only attested from the Old Akkadian to the Old Babylonian period, with most of the examples being Akkadian, though uncommonly Sumerian ones occur too. Despite his origin, he is absent from the inscriptions of rulers of the Akkadian Empire. The similarity between the names of Erra and Lugal-irra is presumed to be accidental, and the element -irra in the latter is Sumerian and is conventionally translated as "mighty". The logogram dU.GUR is the most commonly attested writing of Nergal's name from the Middle Babylonian period onward. This name initially belonged to Nergal's attendant deity (sukkal), and might be derived from the imperative form of Akkadian nāqaru, "destroy!". It has been noted that Ugur was replaced in this role by Ishum at the same time as the use of dU.GUR as a writing of Nergal's name spread. dIGI.DU is attested as a logographic representation of Nergal's name in Neo-Babylonian sources, with the reading confirmed by the alternation between it and dU.GUR in theophoric names. However, in a number of Assyrian texts dU.GUR and dIGI.DU appear as designations of two different deities, with the former being Nergal and the latter remaining unidentified. Authors such as Frans Wiggermann and Julia Krul argue it had the Akkadian reading Pālil. However, Manfred Krebernik states this remains unconfirmed.
A deity designated by the logogram dIGI.DU was also worshiped in Uruk, with the earliest references coming from the reign of Sennacherib and the most recent from the Seleucid period, and according to Krul should be interpreted as "a form of Nergal". Paul-Alain Beaulieu instead argues that it is impossible to identify him as Nergal, as both of them appear alongside Ninurta as a trio of distinct deities in Neo-Babylonian sources. According to the god list An = Anum dIGI.DU could also be used as a logographic writing of the names of Ninurta (tablet VI, line 192; however, a variant lists the sumerogram dGÉSTU instead of dIGI.DU) and the Elamite deity Igišta (tablet VI, line 182; also attested in Elamite theophoric names). It could also be used to represent the names of Lugal-irra and Meslamta-ea. Beaulieu points out that in the Neo-Babylonian period two different deities whose names were rendered as dIGI.DU were worshiped in Udannu, and proposed a relation with Lugal-irra and Meslamta-ea. The single attestation of dIGI.DU as a representation of the name of Alammuš in an astronomical text is presumed to be the result of confusion between him and Ningublaga, the "Little Twins", with Lugal-irra and Meslamtaea, the "Great Twins". Nergal also had a large number of other names and epithets, according to Frans Wiggermann comparable only to a handful of other very popular deities (especially Inanna), with around 50 known from the Old Babylonian period, and about twice as many from the later god list An = Anum, including many compounds with the word lugal, "lord". For instance, he could be referred to as "Lugal-silimma", lord of peace. A few of Nergal's titles point to an occasional association with vegetation and agriculture, namely Lugal-asal, "lord (of the) poplar"; Lugal-gišimmar, "Lord (of the) date palm" (also a title of Ninurta); Lugal-šinig, "Lord (of the) tamarisk"; Lugal-zulumma, "Lord (of the) dates". However, Dina Katz stresses that these names were only applied to Nergal in late sources, and it cannot be assumed that this necessarily reflected an aspect of his character already extant earlier on. A frequently attested earlier epithet is Guanungia, "bull whose great strength cannot be repulsed", already in use in the Early Dynastic period. An alternate name of Nergal listed in the Babylonian recension of the god list Anšar = Anum, de-eb-ri, reflects the Hurrian word ewri, "lord".
Character
Nergal's role as a god of the underworld is already attested in the Early Dynastic Zame Hymns, specifically in the hymn dedicated to Kutha, where he is additionally associated with the so-called "Enki-Ninki deities", a group regarded as ancestors of Enlil believed to reside in the underworld. According to a hymn from the reign of Ishme-Dagan, dominion over the land of the dead was bestowed upon Nergal by his parents, Enlil and Ninlil. He was believed to decide the fates of the dead the same way as Enlil did for the living. In one Old Babylonian adab song Nergal is described as "Enlil of the homeland (kalam) and the underworld (kur)". He was also occasionally referred to as Enlil-banda, "junior Enlil", though this title also functioned as an epithet of the god Enki. In addition to being a god of the underworld, Nergal was also a war god, believed to accompany rulers on campaigns, but also to guarantee peace due to his fearsome nature serving as a deterrent. In that capacity he was known as Lugal-silimma, "lord of peace". He was also associated with disease.
As summed up by Frans Wiggermann, his various domains make him the god of "inflicted death". He also played an important role in apotropaic rituals, in which he was commonly invoked to protect houses from evil. Fragments of tablets containing the Epic of Erra, a text detailing his exploits, were used as amulets. Nergal was associated with Mars. Like him, this planet was linked with disease (especially kidney disease) in Mesopotamian beliefs. However, Mars was also associated with other deities: Ninazu (under the name "the Elam star"), Nintinugga, and especially Simut, in origin an Elamite god. The name of the last of these figures in Mesopotamian sources could outright refer to the planet (mulSi-mu-ut, "the star Simut"). A number of scholars in the early 20th century, for example Emil Kraeling, assumed that Nergal was in part a solar deity, and as such was sometimes identified with Shamash. Kraeling argued that Nergal was representative of a certain phase of the sun, specifically the sun of noontime and of the summer solstice that brings destruction, high summer being the dead season in the Mesopotamian annual cycle. This view is no longer present in modern scholarship. While some authors, for example Nikita Artemov, refer to Nergal as a deity of "quasi-solar" character, primary sources show a connection between him and sunset rather than noon. For instance, an Old Babylonian adab song contains a description of Nergal serving as a judge at sunset, while another composition calls him the "king of sunset". This association is also present in rituals meant to compel ghosts to return to the underworld through the gates to sunset. Nergal's role as a war god was exemplified by some of his attributes: mace, dagger and bow. A mace with three lion-shaped heads and a scimitar adorned with leonine decorations often appear as Nergal's weapons on cylinder seals. He was also often depicted in a type of flat cap commonly, but not exclusively, worn by underworld deities in Mesopotamian glyptic art. Bulls and lions were associated with Nergal. On the basis of this connection it has been proposed that minor deities with bull-like ears on Old Babylonian terracotta plaques and cylinder seals might have been depictions of unspecified members of Nergal's entourage. An entry in the explanatory god list An = Anu ša amēli seemingly associates Nergal with chameleons, as his title Bar-MUŠEN-na, explained as "Nergal of rage" (ša uzzi), is likely a scribal mistake for bar-gun3-(gun3)-na ("the one with a colorful exterior"), presumed to be a term for the chameleon; Ryan D. Winters suggests that the animal's color changing might have been associated with mood swings or choleric temperament, and additionally that it was perceived as a "chthonic" being. War standards could serve as a symbolic representation of Nergal too, and Assyrian armies in particular were often accompanied by such devotional objects during campaigns. A similar symbol also represented Nergal on kudurru, inscribed boundary stones.
Associations with other deities
The god most closely associated with Nergal was Erra, whose name was Akkadian rather than Sumerian and can be understood as "scorching". Two gods with names similar to Erra who were also associated with Nergal were Errakal and Erragal. It is assumed that they had a distinct origin from Erra. Ninazu was seemingly already associated with Nergal in the Early Dynastic period, as a document from Shuruppak refers to him as "Nergal of Enegi", his main cult center.
The city itself was sometimes called "Kutha of Sumer". In later times, especially in Eshnunna, he started to be viewed as a son of Enlil and Ninlil and a warrior god, similar to Nergal. Many minor gods were associated or equated with Nergal. The god Shulmanu, known exclusively from Assyria, was associated with Nergal and even equated with him in god lists. Lagamar (Akkadian: "no mercy"), son of Urash (the male tutelary god of Dilbat), known both from lower Mesopotamian sources and from Mari and Susa, is glossed as "Nergal" in the god list An = Anum. Lagamar, Shubula and a number of other deities are also equated with Nergal in the Weidner god list. Luhusha (Sumerian: "angry man"), worshiped in Kish, was referred to as "Nergal of Kish". Emu, a god from Suhum located on the Euphrates near Mari, was also regarded as Nergal-like. He is directly identified as "Nergal of Sūḫi" in the god list Anšar = Anum, and might be either the same deity as the poorly attested Âmûm (a-mu, a-mu-um or a-mi-im) known from Mari, or alternatively a local derivative of the sea god Yam, possibly introduced to this area by people migrating from further west; Ryan D. Winters notes that in the latter case the association would presumably reflect Nergal's epithet lugala'abba, "king of the sea". Nergal was on occasion associated with Ishtaran, and in this capacity he could be portrayed as a divine judge. However, as noted by Jeremiah Peterson, this association is unusual, as in mythological texts Nergal was believed to act as a judge in locations where the sun sets, while on account of Der's location Ishtaran was usually associated with the east, where the sun rises. Enlil and Ninlil are attested as Nergal's parents in the overwhelming majority of sources, and while in the myth Nergal and Ereshkigal he addresses Ea as "father", this might merely be an honorific, as no other evidence for such an association exists. In the myth Enlil and Ninlil Nergal's brothers are Ninazu (usually instead a brother of Ninmada), Nanna and Enbilulu. In a single text, a Neo-Babylonian letter from Marad, his brothers are instead Nabu and Lugal-Marada, the tutelary god of this city. However, this reference is most likely an example of captatio benevolentiae, a rhetorical device meant to secure the goodwill of the reader, rather than a statement about the genealogy of deities. Multiple goddesses are attested as Nergal's wife in various time periods and locations, but most of them are poorly defined in known documents. While Frans Wiggermann assumes that all of them were understood as goddesses connected to the earth, this assumption is not shared by other Assyriologists. Laṣ, first attested in an offering list from the Ur III period mentioning various deities from Kutha, was the goddess most commonly regarded as Nergal's spouse, especially from the Kassite and Middle Assyrian periods onward. She received offerings from Neo-Babylonian kings alongside Nergal in Kutha. Her name is assumed to have its origin in a Semitic language, but both its meaning and Laṣ' character are unknown. Based on the Weidner god list, Wilfred G. Lambert proposes that she was a medicine goddess. Couples consisting of a warrior god and a medicine goddess (such as Pabilsag and Ninisina or Zababa and Bau) were common in Mesopotamian mythology. Another goddess often viewed as the wife of Nergal was Mammitum. Her name is homophonous with Mami, a goddess of birth known for example from the Nippur god list, leading some researchers to conflate them.
However, it is generally accepted that they were separate deities, and they are kept apart in Mesopotamian god lists. Multiple meanings have been proposed for her name, including "oath" and "frost" (based on a similar Akkadian word, mammû, meaning "ice" or "frost"). It is possible she was introduced in Kutha alongside Erra. In at least one text, a description of a New Year ritual from Babylon during which the gods of Kish, Kutha and Borsippa were believed to visit Marduk (at the time not yet a major god), both she and Laṣ appear side by side as two separate goddesses. In the Nippur god list Laṣ occurs separately from Nergal, while Mammitum is present right behind him, which, along with her receiving offerings alongside him in the Ekur in the same city in the Old Babylonian period, led researchers to conclude that a spousal relation existed between them. She is also the wife of Erra/Nergal in the Epic of Erra. The Middle Babylonian god list An = Anum mentions both Laṣ and Mammitum, equating them with each other, and additionally calls the goddess Admu ("earth") Nergal's wife. She is otherwise only known from personal names and a single offering list from Old Babylonian Mari. In the third millennium BCE in Girsu, the spouse of Nergal (Meslamtaea) was Inanna's sukkal Ninshubur, otherwise seemingly viewed as unmarried. Attestations of Ninshubur as Nergal's sukkal are also known, though they are infrequent. According to the myth Nergal and Ereshkigal he was married to Ereshkigal, the goddess of the dead. In god lists, however, they do not appear as husband and wife, though there is evidence that their entourages started to be combined as early as in the Ur III period. Ereshkigal's importance in Mesopotamia was largely limited to literary, rather than cultic, texts. Nergal's daughter was Tadmushtum, a minor underworld goddess first attested in Drehem in the Ur III period. In an offering list she appears alongside Laṣ. Her name has an Akkadian origin, possibly being derived from the words dāmasu ("to humble") or dāmašu (connected to the word "hidden"), though more distant cognates were also proposed, including Geʽez damasu ("to abolish", "to destroy", or alternatively "to hide"). It has also been proposed that a linguistic connection existed between her and the Ugaritic goddess Tadmish (or Dadmish, ddmš in the alphabetic script), who in some of the Ugaritic texts occurs alongside Resheph, though a copy of the Weidner god list from Ugarit equates Tadmish with Shuzianna rather than Tadmushtum. In Neo-Babylonian lists of so-called "Divine Daughters", pairs of minor goddesses associated with specific temples likely viewed as daughters of their head gods, the "Daughters of E-Meslam" from Kutha are Dadamushda (Tadmushtum) and Belet-Ili. While Frans Wiggermann and Piotr Michalowski additionally regard the god Shubula as Nergal's son, it is actually difficult to determine if such a relation existed between these two deities due to the poor preservation of the tablet of the god list An = Anum where Shubula's position in the pantheon was specified. Shubula might have been a son of Ishum rather than Nergal. He was an underworld god and is mostly known from personal names from the Ur III and Isin-Larsa periods. His name is most likely derived from the Akkadian word abālu ("dry"). There is also clear evidence that he was regarded as Tadmushtum's husband. Nergal's sukkal (attendant deity) was initially the god Ugur, possibly the personification of his sword. After the Old Babylonian period he was replaced in this role by Ishum.
Sporadically, Inanna's sukkal Ninshubur or Ereshkigal's sukkal Namtar was said to fulfill this role in the court of Nergal instead. His other courtiers included umum, so-called "day demons", who possibly represented points in time regarded as inauspicious; various minor deities associated with diseases; the minor warrior gods known as Sebitti; and a number of figures at times associated with Ereshkigal and gods such as Ninazu and Ningishzida as well, for example Namtar's wife Hushbisha, their daughter Hedimmeku, and the deified heroes Gilgamesh and Etana (understood as judges of the dead in this context). In some texts the connection between Gilgamesh in his underworld role and Nergal seems to be particularly close, with the hero being referred to as "Nergal's little brother". Resheph, a western god of war and plague, was already associated with Nergal in Ebla in the third millennium BCE, though the connection was not exclusive, as he also occurs in contexts which seem to indicate a relation with Ea (known in Ebla as Hayya) instead. Furthermore, the Eblaite scribes never used Nergal's name as a logographic representation of Resheph's. According to Alfonso Archi, it is difficult to further speculate about the nature of Resheph and his relation to other deities in Eblaite religion due to lack of information about his individual characteristics. The equivalence between Nergal and the same western god is also known from Ugarit, where Resheph was additionally associated with the planet Mars, much like Nergal in Mesopotamia. Documents from Emar on the Euphrates mention a god called "Nergal of the KI.LAM" (seemingly a term designating a market), commonly identified with Resheph by researchers. Additionally, "Lugal-Rasap" functioned as a title of Nergal in Mesopotamia according to god lists. It has been proposed that in Urkesh, a Hurrian city in northern Syria, Nergal's name was used to represent a local deity of Hurrian origin logographically. Two possible identifications have been proposed: Aštabi and Kumarbi. The former was a god of Eblaite origin, later associated with Ninurta rather than Nergal, while the latter was the Hurrian "father of the gods", usually associated with Enlil and Dagan. Gernot Wilhelm concludes in a recent publication that the identification of Nergal in the early Urkesh inscriptions as Kumarbi is not implausible, but at the same time remains impossible to conclusively prove. He points out that it is also not impossible that Kumarbi only developed as a distinct deity at a later point in time. Alfonso Archi notes that it is also possible that the god meant is Nergal himself, as he is attested in other Hurrian sources as an actively worshiped deity. In the Yazılıkaya sanctuary, Nergal's name was apparently applied to a so-called "sword god" depicted on one of the reliefs, most likely a presently unidentified local god of death. The Elamite god Simut was frequently associated with Nergal, and shared his association with the planet Mars and possibly his warlike character, though unlike his Mesopotamian counterpart he was not an underworld deity. In one case he appears alongside Laṣ. Wouter Henkelman additionally proposes that the "Nergal of Hubshal (or Hubshan)" known from Assyrian sources was Simut. However, other identifications of the deity designated by this moniker have been proposed as well, with Volkert Haas instead identifying him as Ugur. Yet another possibility is that Emu was the deity meant. Based on lexical lists, two Kassite gods were identified with Nergal: Shugab and Dur.
In a Middle Assyrian god list, "Kammush" appears among the epithets of Nergal. According to Wilfred G. Lambert it cannot be established whether this indicates an equation with either the third millennium BCE god Kamish known from the Ebla texts, or the Iron Age god Chemosh from Moab. In late Hellenistic sources from Palmyra, Hatra and Tarsus, Heracles served as the interpretatio graeca of Nergal. Heracles and Nergal were also both (at different points in time) associated with the Anatolian god Sandas, as well as the Tyrian god Melqart, whose name also translates to "king of the city."
Worship
Nergal's main cult center was Kutha, where his temple E-Meslam was located. Andrew R. George proposes the translation "house, warrior of the netherworld" for its name. A secondary name of the E-Meslam was E-ḫuškia, "fearsome house of the underworld". It is already attested in documents from the reign of Shulgi, on whose orders repair work was undertaken there. Later monarchs who also rebuilt it include Apil-Sin, Hammurabi, Ashurbanipal and Nebuchadnezzar II. It continued to function as late as in the Seleucid period. In addition to Kutha, Apak (Apiak) is well attested as a major cult center of Nergal, already appearing in documents from the Sargonic period. Its precise location is not known, but it has been established that it was to the west of Marad. In this city, he could be referred to as Lugal-Apiak. While absent from Assyria in the Akkadian period, Nergal later rose to the status of one of the most important gods there. Tarbishu was a particularly important Assyrian cult center of both Nergal and his wife Laṣ. His temple in this city, originally built by Sennacherib, also bore the name E-Meslam. A third temple named E-Meslam was located in Mashkan-shapir according to documents from the reign of Hammurabi, and it is possible it was dedicated to Nergal too. The veneration of Nergal in this city is well documented. Naram-Sin of Akkad was particularly devoted to Nergal, describing him as his "caretaker" (rābisu) and himself as a "comrade" (rū'um) of the god. At the same time, worship of Nergal in the southernmost cities of Mesopotamia was uncommon in the third millennium BCE, one exception being the presence of "Meslamtaea" in Lagash in Gudea's times. This changed during the reign of Shulgi, the second king from the Third Dynasty of Ur. Theological texts from this period indicate that Nergal was viewed as one of the major gods and as king of the underworld. Tonia Sharlach proposes that the "Nergal of TIN.TIRki" known from this period should be understood as the original tutelary god of Babylon. This interpretation is not supported by Andrew R. George, who notes that Nergal of TIN.TIRki is usually mentioned alongside Geshtinanna of KI.ANki, Ninhursag of KA.AM.RIki, and other deities worshiped in settlements located in the proximity of Umma, and on this basis he argues that this place name should be read phonetically as Tintir and refers to a small town administered directly from said city, and not to Babylon, whose name could be written logographically as TIN.TIRki in later periods. Other authors agree that the worship of Nergal is well attested in the area around Umma. George additionally points out that there is no indication that Babylon was regarded as a major cult center of Nergal in any time period.
In the Old Babylonian period Nergal continued to be worshiped as a god of the dead, as indicated for example by an elegy in which he appears alongside Ningishzida, Etana and Bidu, the gatekeeper of the underworld. He appears for the first time in documents from Uruk in this period. Anam of Uruk built a temple dedicated to him in nearby Uzurpara during the reign of Sîn-gāmil. It is possible that it bore the name E-dimgalanna, "house, great bond of heaven". Multiple temples of other deities (Sud, Aya and Nanna) bearing the same name are attested from other locations as well. Damiq-ilishu of Isin also built a temple of Nergal in this location, the E-kitušbidu, "house whose abode is pleasant". In Uruk itself, Nergal had a small sanctuary, possibly known as E-meteirra, "house worthy of the mighty one". A temple bearing this name was rebuilt by Kudur-Mabuk at one point. Nergal continued to be worshiped in Uruk as late as early Achaemenid times, and he is mentioned in a source from the 29th year of the reign of Darius I. One late document mentions an oath taken in the presence of a priest (sanga) of Nergal during the sale of a prebend, in which Nergal and Ereshkigal were invoked as divine witnesses. Ancient lists of temples indicate that a temple of Nergal bearing the name E-šahulla, "house of the happy heart", was located in Mê-Turan. It was identified during excavations based on brick inscriptions and votive offerings dedicated to Nergal. It shared its name with a temple of Nanaya located in Kazallu. According to Andrew R. George, its name was most likely a reference to the occasional association between Nergal and joy. For example, a street named "the thoroughfare of Nergal of Joy" (Akkadian: mūtaq Nergal ša ḫadê) existed in Babylon, while the god list An = Anum ša amēli mentions "Nergal of jubilation" (dU.GUR ša rišati). In Lagaba, Nergal was worshiped under the name Išar-kidiššu. He could also be referred to as the tutelary god of Marad, though this city was chiefly associated with Lugal-Marada. Offerings or other forms of worship are also attested from Dilbat, Isin, Larsa, Nippur and Ur. It is possible that a temple of Nergal bearing the name E-erimḫašḫaš, "house which smites the wicked", which was at one point rebuilt by Rim-Sîn I, was located in the last of these cities. Temples dedicated to him also existed in both Isin and Nippur, but their names are not known. In the Neo-Babylonian period Nergal was regarded as the third most important god in the Babylonian state pantheon after Marduk and Nabu. These three gods often appear together in royal inscriptions. Based on a cylinder of Neriglissar, providing for E-Meslam in Kutha was regarded as a royal duty, similarly to the case of Marduk's and Nabu's main temples (respectively E-Sagil in Babylon and E-Zida in Borsippa). However, administrative documents indicate that Nergal and his wife Laṣ received fewer offerings than Marduk or Nabu. In some families it was seemingly customary to give the third son a theophoric name invoking Nergal, in accordance with his position in the state pantheon. The 14th and 28th days of the month were regarded as sacred to Nergal, as was the number 14 itself, though it was also associated with Sakkan. Unlike other Mesopotamian deities associated with the underworld (for example Ereshkigal), Nergal is well attested in theophoric names. Nergal was also incorporated into the pantheon of the Hurrians, and it has been argued he was among the earliest foreign gods they adopted.
He is one of the gods considered to be "pan-Hurrian" by modern researchers, a category also encompassing the likes of Teshub, Shaushka or Nupatik. He is already attested in the inscriptions of two early Hurrian kings of Urkesh, Tish-atal and Atal-shen. An inscription of the former is the oldest known text in Hurrian: "Tish-atal, endan of Urkesh, has built a temple of Nergal. May the god Lubadaga protect this temple. Who destroys it, [him] may Lubadaga destroy. May the weather god not hear his prayer. May the mistress of Nagar, the sun-god, and the weather-god [...] him who destroys it." The sun god and the weather god in this inscription are most likely Hurrian Shimige and Teshub. Atal-shen referred to Nergal as the lord of a location known as Hawalum: "Of Nergal the lord of Hawalum, Atal-shen, the caring shepherd, the king of Urkesh and Nawar, the son of Sadar-mat the king, is the builder of the temple of Nergal, the one who overcomes opposition. Let Shamash and Ishtar destroy the seeds of whoever removes this tablet. Shaum-shen is the craftsman." Giorgio Buccellati in his translation quoted above renders the names of the other deities invoked as Shamash and Ishtar, but according to Alfonso Archi the logograms dUTU and dINANNA should be read as Shimige and Shaushka in this case. The worship of Nergal is also well attested in the eastern Hurrian settlements. These include Arrapha, referred to as the "City of the Gods", which was located near modern Kirkuk, as well as Ḫilamani, Tilla and Ulamme, where an entu priestess dedicated to him resided. In the last three of these cities, he was associated with a goddess referred to as "dIŠTAR Ḫumella", the reading and meaning of whose name are unclear.
Mythology
Two versions of the myth Nergal and Ereshkigal are known, one from a single Middle Babylonian copy found in Amarna, seemingly copied by a scribe whose native language was not Akkadian, and another known from Sultantepe and from Uruk, with copies dated to the Neo-Assyrian and Neo-Babylonian periods, respectively. The time of original composition is uncertain, with proposed dates varying from Old Babylonian to Middle Babylonian times. Whether a Sumerian original existed is unknown, and the surviving copies are all written in Akkadian. After Nergal fails to pay respect to Ereshkigal's sukkal Namtar during a feast, where Namtar acts as a proxy of his mistress, who cannot leave the underworld to attend, Ereshkigal demands to have Nergal sent to the underworld to answer for it. The Amarna version states that she planned to kill Nergal, but this detail is absent from the other two copies. Nergal descends to the underworld, but he is able to avoid many of its dangers thanks to advice given to him by Ea. However, he ignores one of them, and has sex with Ereshkigal. After six days he decides to leave while Ereshkigal is asleep. After noticing this, she dispatches Namtar and demands that the other gods convince Nergal to return, threatening to open the gates of the underworld if she does not get what she asks for. Nergal is handed over to her again. In the Amarna version, where Ereshkigal initially planned to kill Nergal, he defeats Namtar and prepares to kill Ereshkigal. To save herself, she suggests that they can get married and share the underworld. The other two known copies give the myth a happy ending: as noted by Assyriologist Alhena Gadotti, "the two deities seem to reunite and live happily ever after", and the myth concludes with the line "they impetuously entered the bedchamber".
According to assyriologists such as Stephanie Dalley the purpose of this narrative was most likely to find a way to reconcile two different views of the underworld, one from the north centered on Nergal, and another from the south centered on Ereshkigal. Tikva Frymer-Kensky's attempt at interpreting it as evidence of "marginalization of goddesses" is regarded as erroneous. According to Alhena Gadotti the idea that Ereshkigal was supposed to share kingship over the underworld with her spouse is also known from the Old Babylonian composition Gilgamesh, Enkidu and the Underworld, in which Anu and Enlil give the underworld to her "as a dowry, her portion of the paternal estate's inheritance, which she controlled until she married". It is, however, impossible to tell which of the three gods regarded as Ereshkigal's husbands in various sources was implicitly meant to be the recipient of the dowry in this composition—Gugalanna, Nergal, or Ninazu. The oldest surviving copies of the Epic of Erra come from the Assyrian city of Nineveh and have been dated to the seventh century BCE, but it has been argued that the composition is between 100 and 400 years older than that based on possible allusions to historical events which occurred during a period of calamity which Babylonia experienced roughly between the eleventh and eighth centuries BCE. A colophon indicates that it was compiled by a certain Kabti-ilani-Marduk, which constitutes an uncommon example of a direct statement of authorship of an ancient Mesopotamian text. Nergal (the names Nergal and Erra are both used to refer to the protagonist of the narrative) desires to wage war to counter a state of inertia he found himself in. His weapons (the Sebitti) urge him to take action, while his sukkal Ishum, who according to Andrew R. George appears to play the role of Nergal's conscience in this myth, attempts to stop him. Nergal dismisses the latter, noting that it is necessary to regain respect in the eyes of humans, and embarks on a campaign. His first goal is Babylon. Through trickery he manages to convince Marduk (portrayed as a ruler past his prime, rather than as a dynamic hero, in contrast with other compositions) to leave his temple. However, Marduk returns too soon for Nergal to successfully start his campaign, and as a result, in a long speech, he promises to give the other gods a reason to remember him. As a result of his declaration (or perhaps because of Marduk's temporary absence), the world seemingly finds itself in a state of cosmic chaos. Ishum once again attempts to convince Nergal to stop, but his pleading does not accomplish its goal. Nergal's acts keep escalating and soon Marduk is forced to leave his dwelling again, fully leaving the world at Nergal's mercy. A number of graphic descriptions of the horrors of war, focused on nameless humans suffering because of Nergal's reign of terror, follow. This is still not enough, and he declares that his next goal is to destroy the remaining voices of moderation, and the cosmic order as a whole. However, Ishum eventually manages to bring an end to the bloodshed. He does so by waging a war himself, targeting the inhabitants of Mount Sharshar, seemingly a site associated with the origin of the aforementioned period of chaos in the history of late second and early first millennium BCE Babylonia. Ishum's war is described in very different terms from Nergal's, and with its end the period of instability comes to a close. 
Nergal is seemingly content with the actions of his sukkal and with hearing the other gods acknowledge the power of his rage. The narrative ends with Nergal instructing Ishum to spread the tale of his rampage, but also to make it clear that it was only thanks to his calming presence that the world was spared. A poorly preserved Middle Assyrian composition, regarded as similar to the Labbu myth, seemingly describes a battle between Nergal (possibly acting on behalf of his father Enlil or the sky god Anu) and a monstrous serpent born in the sea. The myth Enmesharra's Defeat, only known from a single, heavily damaged copy from the Seleucid or Parthian period, casts Nergal as the warden of the eponymous antagonist and his seven sons, the Sebitti, presumably imprisoned in the underworld. In the surviving fragments Enmesharra unsuccessfully pleads with him to be released to avoid being put to death for his crimes at the orders of Marduk. In the aftermath of the ordeal, the universe is reorganized and Marduk shares lordship over it, which seemingly originally belonged to Anu in this composition, with Nergal and Nabu. Wilfred G. Lambert notes these gods were the three most prominent deities in the Neo-Babylonian state pantheon. Curiously, Erra makes a brief appearance as a god distinct from Nergal, with his former sphere of influence reassigned to the latter. Andrew R. George proposes that a myth presently unknown from textual records dealt with Nergal's combat with a one-eyed monster, the igitelû. He notes that Akkadian omen texts from Susa and from the Sealand archives appear to indicate that one-eyed creatures were known as igidalu, igidaru or igitelû, possibly a loanword from Sumerian igi.dili ("one eye"), and that the only god associated with them was Nergal, who in one such omen text is identified as the slayer of an igitelû. There is also evidence that the birth of one-eyed animals was regarded as an omen connected to Nergal. He proposes that a relief originally excavated in Khafajah (ancient Tutub) depicting a god stabbing a one-eyed monster with rays of light emanating from his head might be a pictorial representation of this hypothetical myth, though other interpretations have been proposed too, including Marduk killing Tiamat and Ninurta killing Asag. However, neither of these has found widespread support, and art historian Anthony Green in particular expressed skepticism regarding them, noting that art might preserve myths not known from the textual record. Wilfred G. Lambert suggested that the cyclops in question might instead be a depiction of Enmesharra, based on his description as a luminous deity in Enmesharra's Defeat. Later relevance Nergal is mentioned in the Book of Kings as the deity of the city of Cuth (Kutha): "And the men of Babylon made Succoth-benoth, and the men of Cuth made Nergal" (2 Kings, 17:30). According to the rabbinic tradition, he was associated with the image of a foot or a rooster. In Mandaean cosmology, the name for Mars is Nirig (ࡍࡉࡓࡉࡂ), a derivative of Nergal, which is a part of a recurrent pattern of Mandaean names of celestial bodies being derived from names of Mesopotamian deities. Victorian lexicographer E. Cobham Brewer asserted that the name of Nergal, whom he identified as "the most common idol of ancient Phoenicians, Indians and Persians", meant "dunghill cock". 
This translation is incorrect in the light of modern research: Nergal's name was most likely understood as "Lord of the big city", his emblematic animals were bulls and lions, and archeological data indicate that chickens were unknown in Mesopotamia prior to the ninth century BCE and left no trace in cuneiform sources. References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Basic_Law:_Israel_as_the_Nation-State_of_the_Jewish_People] | [TOKENS: 4672]
Contents Basic Law: Israel as the Nation-State of the Jewish People Basic Law: Israel as the Nation-State of the Jewish People (Hebrew: חוֹק יְסוֹד: יִשְׂרָאֵל—מְדִינַת הַלְּאוֹם שֶׁל הָעָם הַיְּהוּדִי), informally known as the Nation-State Bill (חוֹק הַלְּאוֹם) or the Nationality Bill, is an Israeli Basic Law that specifies the country's significance to the Jewish people. It was passed by the Knesset—with 62 in favour, 55 against, and two abstentions—on 19 July 2018 (7 Av 5778) and is largely symbolic and declarative in nature. The law outlines a number of roles and responsibilities by which Israel is bound in order to fulfill the purpose of serving as the Jews' nation-state. However, it was met with sharp backlash internationally and has been characterized as racist and undemocratic by some critics. After it was passed, several groups in the Jewish diaspora expressed concern that it was actively violating Israel's self-defined legal status as a "Jewish and democratic state" in exchange for adopting an exclusively Jewish identity. The European Union stated that the Nation-State Bill had complicated the Israeli–Palestinian peace process, while the Arab League, the Palestine Liberation Organization, the Organization of Islamic Cooperation, and the Muslim World League condemned it as a manifestation of apartheid. Petitions were filed with the Supreme Court of Israel challenging the constitutionality of the law. In January 2019, the Supreme Court announced that such challenges would be heard by an 11-justice panel and would decide if the law, in whole or in part, violates Israel's Basic Law: Human Dignity and Liberty, which was passed by the Knesset with super-legal status in 1992. Additionally, the hearing would also be the first time that the Supreme Court addressed the question of whether it had the authority to strike down another Basic Law on the basis of threats to constitutionality. In July 2021, the Supreme Court ruled that the law was constitutional and did not negate Israel's democratic character. Writing the opinion for the majority, Esther Hayut, then President of the Supreme Court, stated that this "Basic Law is but one chapter in our constitution taking shape and it does not negate Israel's character as a democratic state." The court's majority opinion concurred with arguments that the law merely declares the obvious—that Israel is a Jewish state—and that this does not detract from the individual rights of non-Jewish citizens, especially in light of other laws that ensure equal rights to all. Legislation history On 3 August 2011, the Chairman of the Foreign Affairs and Defense Committee, Avi Dichter, together with 39 other Knesset members, filed the Basic Law proposal: Israel as the Nation-State of the Jewish People, which seeks to determine the nature of the state of Israel as the nation-state of the Jewish people, and as such it interprets the term "Jewish and democratic state" which appears in the Israeli basic laws Freedom of Occupation and Human Dignity and Liberty. In July 2017, a special Joint committee headed by MK Amir Ohana (Likud) was formed to revive the Nation-State Bill, which was then approved for a first reading on 13 March 2018. 
The committee oversaw a number of changes, mostly regarding articles such as the "Hebrew Law", "Ingathering of the Exiles", and "Jewish Settlement", replacing an earlier version that would have enabled the state to allow groups to establish separate communities "on the basis of religion and nationality" with a version that emphasised "developing Jewish communities a national value, and will act to encourage, promote, and establish them". Upon presenting the reformed bill, Chairman Ohana stated: "This is the law of all laws. It is the most important law in the history of the State of Israel, which says that everyone has human rights, but national rights in Israel belong only to the Jewish people. That is the founding principle on which the state was established". Minister Yariv Levin, a strong backer of the proposal, called it "Zionism's flagship bill... it will bring order, clarify what is taken for granted, and put Israel back on the right path. A country that is different from all others in one way, that it is the nation-state of the Jewish people." On 1 May 2018, the Knesset passed the Nation-State Bill, with a majority of 64 voting in favor of the bill and 50 against in its first reading. On 19 July 2018, after a stormy debate which lasted for hours, the Knesset approved the Nation-State Bill in second and third readings by a vote of 62 in favor, 55 against and two abstentions. Following the vote, members of the Joint List tore up a printed text of the law while shouting out "Apartheid" on the floor of the Knesset. MKs from the coalition, on the other hand, applauded the passing of the legislation. Content of the Basic Law The Basic Law comprises eleven clauses, as follows: 1 — Basic Principles A. The land of Israel is the historical homeland of the Jewish people, in which the State of Israel was established. B. The State of Israel is the national home of the Jewish people, in which it fulfills its natural, cultural, religious, and historical right to self-determination. C. The right to exercise national self-determination in the State of Israel is unique to the Jewish people. 2 — Symbols of the State A. The name of the state is "Israel". B. The state flag is white, with two blue stripes near the edges and a blue Star of David in the center. C. The state emblem is a seven-branched menorah with olive leaves on both sides and the word "Israel" beneath it. D. The state anthem is "Hatikvah". E. Details regarding state symbols will be determined by the law. 3 — Capital of the State Jerusalem, complete and united, is the capital of Israel. 4 — Language A. The state's language is Hebrew. B. The Arabic language has a special status in the state; Regulating the use of Arabic in state institutions or by them will be set in law. C. This clause does not harm the status given to the Arabic language before this law came into effect. 5 — Ingathering of the Exiles The state will be open for Jewish immigration and the ingathering of exiles. 6 — Connection to the Jewish people A. The state will strive to ensure the safety of the members of the Jewish people and of its citizens in trouble or in captivity due to the fact of their Jewishness or their citizenship. B. The state shall act within the Diaspora to strengthen the affinity between the state and members of the Jewish people. C. The state shall act to preserve the cultural, historical, and religious heritage of the Jewish people among Jews in the Diaspora. 7 — Jewish Settlement A. 
The state views the development of Jewish settlement as a national value and will act to encourage and promote its establishment and consolidation. 8 — Official Calendar The Hebrew calendar is the official calendar of the state and alongside it the Gregorian calendar will be used as an official calendar. Use of the Hebrew calendar and the Gregorian calendar will be determined by law. 9 — Independence Day and Memorial Days A. Independence Day is the official national holiday of the state. B. Memorial Day for the Fallen in Israel's Wars and Holocaust and Heroism Remembrance Day are official memorial days of the State. 10 — Days of Rest and Sabbath The Sabbath and the festivals of Israel are the established days of rest in the state; Non-Jews have a right to maintain days of rest on their Sabbaths and festivals; Details of this issue will be determined by law. 11 — Immutability This Basic Law shall not be amended, unless by another Basic Law passed by a majority of Knesset members. Litigation In July 2018 Member of Knesset Akram Hasson (Kulanu) and other Israeli Druze officials filed a petition with the Supreme Court of Israel challenging the constitutionality of the law. This was followed in January 2019 by a petition filed by the Association for Civil Rights in Israel. The Supreme Court announced that challenges to the constitutionality of the law would be heard by an 11-justice panel and would decide if the law, in whole or in part, violates the Basic Law: Human Dignity and Liberty, considered the country’s foundational legal basis. The hearing would be the first time the Supreme Court addressed the question of whether it has the authority to strike down another Basic Law in whole or in part on such a basis. The Supreme Court issued its decision on the constitutionality of the law in July 2021. In a 10-1 ruling, the court declared that the law was constitutional and did not negate the state’s democratic character. Writing the opinion for the majority, President of the Court, Esther Hayut, stated that "This basic law is but one chapter in our constitution taking shape and it does not negate Israel's character as a democratic state." The court's majority opinion concurred with arguments that the law merely declares the obvious—that Israel is a Jewish state—and that this does not detract from the individual rights of non-Jewish citizens, especially in light of other laws that ensure equal rights to all. The lone dissenting judge was Justice George Karra, an Arab member of the court. In a separate case, in November 2020, an Israeli magistrate's court ruled, based on the law as justification, that the northern city of Karmiel was a "Jewish city", and that Arabic-language schools or funding transport for Arab schoolchildren would be liable to alter the city’s demographic balance and damage its character. The ruling essentially blocked access to schools for Arab children in Karmiel. The court implied that facilitating this access would incentivize Palestinian Arab citizens of Israel to move into the city, thus damaging its "Jewish character." Israel's attorney general opposed the ruling and stated that the court had interpreted the law incorrectly. Upon appeal, the Haifa District Court ruled that the lower court's initial dismissal of the claims for funding and transportation were an inappropriate application of the Nation-State law, and called the decision "fundamentally wrong." Controversy Controversy has surrounded the Basic Law since it was first proposed in 2011. 
Prominent Israeli political figures, especially from the left of the political spectrum, and academic figures, such as Professor Amnon Rubinstein, have been highly critical, and frequent references have been made to the potential harm that the passage of the bill could do to Israel's democracy and the rights of its minorities. The proposal has even been criticized by people affiliated with the Israeli Right, such as the Minister and Likud Party MK Benny Begin. Critics have argued that the proposed law raises difficult questions concerning the definition of Israel as a Jewish and democratic state, and that it may upset the delicate balance between the state's Jewish character and its democratic character. On 20 November 2011, a special discussion was held on the matter at the George Shultz Roundtable Forum, which was sponsored by the Israeli Democracy Institute, and was attended by Avi Dichter and various Israeli public figures and prominent academic figures. Prime Minister of Israel Benjamin Netanyahu defended his draft of the Nation-State bill on 26 November 2014, declaring Israel to be "The nation-state of the Jewish people, and the Jewish people alone". He also clarified: "I want a state of one nation: the Jewish nation-state, which includes non-Jews with equal rights." Since Israel is the land of the Jewish people, the PM is of the opinion that it is entitled to principles that combine the nation and the state of the Jewish people and grant "equal rights for all its citizens, without discrimination against religion, race, or sex". In August 2011, several Knesset members who had initially signaled their support for the bill subsequently withdrew their support after controversy arose over the downgrading of the Arabic language and concerns that the bill would fail to properly enshrine minority rights. MK Shlomo Molla (Kadima) made his signature conditional, stating: "Israel is the national home of the Jewish people, that much is clear. But at the same time, when we are the Jewish majority, the rights of the minority must also be enshrined in the Basic Law and they need legal protection. Without the completion of such a Basic Law, it has no moral validity. Nor will antagonism arise." MK Avi Dichter (Likud) countered that the law only enshrined an existing situation, noting: "Court rulings deal constantly with the permanent status of the language: the Hebrew language is defined as a language with a higher status than the Arabic language, and as the state's official language. Arabic on the other hand suffers from constant blurring of its status and lack of clarity about its accessibility to the native speakers of the language. According to the bill proposal, the Arabic language would receive a special status which would require the state to enable accessibility to all native speakers of the language." In an open letter, Reuven Rivlin, Israel's president, raised his concern over an earlier draft of the legislation, saying it "could harm the Jewish people worldwide and in Israel, and could even be used as a weapon by our enemies". To register his displeasure with the law, Rivlin, fulfilling his duty as president to sign all laws passed by the Knesset, signed his name in Arabic. Responding to Arab legislators who objected to the proposed basic law, MK Avi Dichter said that, "The most you can do is to live among us as a national minority that enjoys equal individual rights, but not equality as a national minority." 
In an interview with Haaretz, Tourism Minister Yariv Levin, who supervised the passage of the law, said that "Through the law, we can prevent family reunification [of Israeli citizens and Palestinians] not only out of security motives, but also motivated to maintain the character of the country as the national homeland of the Jewish people." He also insisted on rejecting the inclusion of equality in the legislation to avoid undermining the Law of Return. Reaction Retired Israeli Chief Justice Aharon Barak, who led the "constitutional revolution" that established judicial review in the 1990s, said that "This is an important law". Barak drew a distinction between national and civic rights: "The recognition of the minority rights of Israel's Arab citizens does not grant them a national right to self-determination within the State of Israel. They are a minority whose identity and culture must be protected, but if they want to realize their right to national self-determination, they can only do it in a state of their own, not in Israel." He also accepted the argument that the right to equality does not belong in this law, but insisted that it be made explicit (rather than just implied) in Basic Law: Human Dignity and Liberty. Heads of Israel's Druze community petitioned the Israeli Supreme Court in protest against the law, and 100 Druze reservists complained that, though they had fought in Israel's wars for generations, the bill relegated them to second-class status. According to Rami Zeedan, who is himself an Israeli Druze, the main problem with the law in the eyes of the Israeli Druze is that it ignores the definition of "Israeli" as the nation of the state, which the Druze hold as an integral part of their social self-identification. The Assembly of Catholic Ordinaries of the Holy Land asked the government to rescind the law. When the law passed, Israeli Arab parliamentary members of the Joint List ripped up copies of the bill and shouted, “Apartheid,” on the floor of the Knesset. Ayman Odeh, the then leader of a coalition of primarily Arab parties in opposition, said in a statement that Israel had “passed a law of Jewish supremacy and told us that we will always be second-class citizens”. Mass protests were held in Tel Aviv following the passage of the law, which critics labelled as racist towards the country's Arabs. In particular, many Arabs were angered by the law's downgrading of Arabic from an official language to one with an ambiguous "special status". Palestinians, liberal American Jews, and many Israelis on the left denounced the law as racist and undemocratic, with Yohanan Plesner, the head of the non-partisan Israel Democracy Institute, calling the new law “jingoistic and divisive” and an “unnecessary embarrassment to Israel”. Likud MK Benny Begin, son of the party's co-founder Menachem Begin, expressed his concern about the direction of his party; in his opinion, it was moving a little further away from human rights. The Adalah Legal Center for Arab Minority Rights in Israel said that the law "contains key elements of apartheid, which is not only immoral, but absolutely prohibited under international law". Adalah Director Hassan Jabareen said that the law would make Israel an exclusively Jewish country, which "made discrimination a constitutional value and made its attachment to favouring Jewish supremacy the reason for its institutions". 
Shimon Stein and Moshe Zimmermann commented that the new law calls into question the equality of Arabs living in Israel concerning the loss of Arabic's status as an official language, also claiming that "only" the country's Jewish settlements and Jewish immigration are considered fundamental values. They claimed that the law, beginning with the clause: "The land of Israel is the historical homeland of the Jewish people, in which the State of Israel was established", and lacking any mention of any other people within the land or of borders, opens up a loophole for annexation of the West Bank and a goodbye to the two-state solution and democracy. Eugene Kontorovich published an article on the proposed law in which he compared it to the situation in many European nation-states, and found that seven member states of the European Union "have constitutional 'nationhood' provisions, which typically speak of the state as being the national home and locus of self-determination for the country's majority ethnic group". He supported this claim with two detailed examples, Latvia and Slovakia, stating that in the light of this, the proposed bill in Israel had "nothing racist, or even unusual, about having national or religious character reflected in constitutional commitments" and concluded that "it is hard to understand why what works for them should be so widely denounced when it comes to Israel." Ayman Odeh, head of the Joint List party, condemned the law, seeing it as "the death of democracy". Israeli Prime Minister Netanyahu responded that the civil rights of every Israeli citizen is guaranteed in a series of Knesset laws, including Basic Law: Human Dignity and Liberty, but the national rights of the Jewish people in Israel had not been enshrined by law until now. In response to complaints from the Druze community, Netanyahu stated in a subsequent cabinet meeting: "In contrast to the outrageous comments from the left attacking the Jewish state, I was touched by the sentiments of our brothers and sisters in the Druze community", and committed to meeting with Druze leaders to find solutions to their concerns. Initial meetings with Druze leaders fell apart, however, when Netanyahu walked out after one Druze leader refused Netanyahu's demand that he take back his use of the term "apartheid" to refer to the law on social media. Some Druze participants suggested that Netanyahu had deliberately torpedoed the meeting when he saw that they would not endorse cosmetic changes to the law. A poll conducted by Panel Politics found that 58% of Israeli Jews support the law, 34% are against and 8% have no opinion (among 532 responses). The poll found more support among people who define themselves as right-wing or centrist, while leftists are more likely to oppose it. A survey, conducted by the Israeli Democracy Institute and based on the replies of 600 Israelis, showed that the majority of the public, 59.6% of Jews and 72.5% of Arabs, believe that equality for all Israeli citizens should have been also covered by the law. In response to the presence of Palestinian flags during a protest against the law in Tel Aviv, Netanyahu said: "There is no greater testament to the necessity of this law. We will continue to wave the Israeli flag and sing Hatikvah with great pride." The secretary-general of the Palestine Liberation Organization, Saeb Erekat, described it as a "dangerous and racist law" which "officially legalizes apartheid and legally defines Israel as an apartheid system". 
Backlash abroad has shown disapproval of the law by Jewish groups, with the American Jewish Committee stating the law "put at risk the commitment of Israel's founders to build a country that is both Jewish and democratic". Additionally, Jonathan Greenblatt, CEO of the Anti-Defamation League (ADL), said: "While there are provisions that we agree with—notably with regard to state symbols like the anthem, flag, and capital Jerusalem; as well as in re-affirming that the State of Israel is open to Jewish immigration—we are troubled by the fact that the law, which celebrates the fundamental Jewish nature of the state, raises significant questions about the government's long-term commitment to its pluralistic identity and democratic nature." The European Union expressed concern over the passing of the law, saying it would "complicate a two-state solution to the Israel-Palestinian conflict". Turkish President Recep Tayyip Erdoğan, while addressing Grand National Assembly MPs in Ankara, said that the "spirit of Hitler" lives on in Israel, commenting specifically that he believes "no difference [exists] between Hitler's obsession with a pure race and the understanding that these ancient lands are just for the Jews." He also called Israel "the world's most Zionist, fascist, racist state." The statements were condemned by Israeli prime minister Benjamin Netanyahu, who described Erdoğan's rule as "a dark dictatorship" and stated that Erdoğan "is massacring Syrians and Kurds and has jailed tens of thousands of his own citizens." In addition, Israel considers comparisons of its government with the Nazi regime as an egregious insult. See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-curiel19-72] | [TOKENS: 13839]
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
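The inverse relationship between Hawking temperature and black hole mass mentioned above can be made concrete with a short calculation. The following Python sketch uses the standard formula T = ħc³/(8πGMk_B); the 10-solar-mass example is an illustrative assumption chosen to represent a stellar black hole, not a value taken from this article.

```python
import math

# Physical constants (SI units, approximate CODATA values)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.98847e30       # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Black-body temperature of Hawking radiation for a black hole of the given mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# Example: a 10-solar-mass stellar black hole (illustrative choice)
print(hawking_temperature(10 * M_SUN))  # ~6e-9 K, i.e. billionths of a kelvin
```

Doubling the mass halves the temperature, which is why stellar and supermassive black holes are far too cold for their Hawking radiation to be observed directly.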
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
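Michell's 1784 estimate described at the start of this passage can be reproduced with the Newtonian escape velocity. The sketch below is a rough illustration rather than a historical reconstruction: it assumes a star with the Sun's mean density but 500 times its radius and checks whether the surface escape velocity sqrt(2GM/R) exceeds the speed of light.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Newtonian surface escape velocity sqrt(2 G M / R)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# A body with the Sun's mean density but 500x its radius has 500^3 times its mass.
scale = 500
v = escape_velocity(scale**3 * M_SUN, scale * R_SUN)
print(v / 1000, "km/s")   # ~3.1e5 km/s
print(v > C)              # True: in Michell's picture, light could not escape
```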
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the cylindrically symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner-Nordstrom and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes are the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away, the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality $\frac{Q^{2}}{4\pi\epsilon_{0}} + \frac{c^{2}J^{2}}{GM^{2}} \leq GM^{2}$ for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. However, this theory has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often fast: the stellar-mass black hole GRS 1915+105, for example, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects. 
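The charge and spin constraint quoted in this section can be checked numerically. The sketch below is a minimal illustration in SI units that evaluates the inequality Q²/(4πε₀) + c²J²/(GM²) ≤ GM² as stated above; the sample mass and angular momenta are arbitrary placeholder values.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
EPS0 = 8.854e-12       # vacuum permittivity, F/m
M_SUN = 1.989e30       # solar mass, kg

def is_sub_extremal(mass_kg: float, charge_c: float, ang_mom: float) -> bool:
    """True if (M, Q, J) satisfies Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2,
    i.e. the configuration has an event horizon rather than a naked singularity."""
    lhs = charge_c**2 / (4 * math.pi * EPS0) + C**2 * ang_mom**2 / (G * mass_kg**2)
    return lhs <= G * mass_kg**2

m = 10 * M_SUN                     # placeholder 10-solar-mass black hole
j_max = G * m**2 / C               # maximal (extremal) angular momentum for Q = 0
print(is_sub_extremal(m, 0.0, 0.5 * j_max))   # True: half the maximal spin
print(is_sub_extremal(m, 0.0, 1.1 * j_max))   # False: would be a naked singularity
```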
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole's mass and the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is $J \leq \frac{GM^{2}}{c}$, allowing definition of a dimensionless spin magnitude such that $0 \leq \frac{cJ}{GM^{2}} \leq 1$. Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by $Q \leq \sqrt{G}\,M$, where G is the gravitational constant and M is the black hole's mass. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two-to-four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10⁻⁵ grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles resist being in the same place as each other. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become a white dwarf. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star. 
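The dimensionless spin magnitude defined above, a* = cJ/(GM²), is easy to compute once a mass and angular momentum are known. The short sketch below illustrates the conversion; the mass is the rough Sagittarius A* figure quoted earlier in the article and the angular momentum is a placeholder set to 90% of the extremal value, so the output is illustrative rather than a measurement.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def max_angular_momentum(mass_kg: float) -> float:
    """Extremal (uncharged) limit J_max = G M^2 / c."""
    return G * mass_kg**2 / C

def dimensionless_spin(mass_kg: float, ang_mom: float) -> float:
    """a* = c J / (G M^2), which lies between 0 and 1 for an uncharged black hole."""
    return C * ang_mom / (G * mass_kg**2)

m = 4.3e6 * M_SUN                      # roughly the mass quoted for Sagittarius A*
j = 0.9 * max_angular_momentum(m)      # placeholder: ~90% of the maximal rate
print(dimensionless_spin(m, j))        # 0.9
```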
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 10² to 10⁵ solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110-350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 10⁶ times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 10⁹-10¹⁰ solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50-100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the areas around black holes some of the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small fraction of the matter falling towards the black hole gets accelerated away along the hole's rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly-magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford-Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion. 
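The mass classes described in this section can be summarized in a small helper function. The boundaries below are the approximate ranges quoted in the text; they overlap and are not sharp in practice, so this is only an illustrative sketch with assumed cutoffs, not a standard classification routine.

```python
def classify_black_hole(mass_solar: float) -> str:
    """Rough mass-based classification using the approximate ranges given in the article."""
    if mass_solar < 1e2:
        return "stellar (roughly a few to ~100 solar masses)"
    elif mass_solar < 1e5:
        return "intermediate-mass (~10^2 to 10^5 solar masses)"
    elif mass_solar < 1e9:
        return "supermassive (more than ~10^6 solar masses)"
    else:
        return "ultramassive (more than ~10^9 solar masses)"

print(classify_black_hole(30))       # stellar
print(classify_black_hole(4.3e6))    # supermassive
```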
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum.

Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape that resembles a doughnut.

Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer.

In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius at which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}}}, where {\displaystyle r_{\rm {ISCO}}} is the radius of the ISCO, {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the black hole, {\displaystyle G} is the gravitational constant, and {\displaystyle c} is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde).
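As a numerical illustration of the Schwarzschild-case formula above, the minimal Python sketch below evaluates the Schwarzschild radius and the corresponding ISCO for a few masses; the specific masses are arbitrary choices for illustration, not values from this article.

```python
# Minimal sketch: ISCO radius for a non-spinning (Schwarzschild) black hole,
# r_ISCO = 3 * r_s = 6GM/c^2, evaluated for a few illustrative masses.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

def isco_radius(mass_kg):
    # Valid for zero spin and zero charge; spin moves the ISCO inward (prograde)
    # or outward (retrograde), as described in the text.
    return 3 * schwarzschild_radius(mass_kg)

for m_solar in (1, 10):
    m = m_solar * M_sun
    print(f"{m_solar} M_sun: r_s = {schwarzschild_radius(m)/1e3:.2f} km, "
          f"r_ISCO = {isco_radius(m)/1e3:.2f} km")
```

For one solar mass this gives a Schwarzschild radius of about 2.95 km and an ISCO radius of about 8.9 km, consistent with the coefficient in the formula above.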
For a particle orbiting retrograde, for instance, the ISCO can be as far out as about {\displaystyle 9r_{\text{s}}}, while for a particle orbiting prograde it can lie as close as the event horizon itself.

The photon sphere is a spherical boundary on which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations. The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will lie 1 to 3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will lie between 3 and 5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease with increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole.

Near a rotating black hole, spacetime rotates like a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down its rotation. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.

The observable region of spacetime around a black hole closest to its event horizon is called the plunging region.
In this area it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull.

For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km} }, where rs is the Schwarzschild radius and M☉ is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}}}, half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 10⁸ M☉ black hole is comparable to that of water.

The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred. For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate.

To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole. This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer. All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half of a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behavior. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.

Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter dictating the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This contrasts with a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted.

Ignoring quantum effects, every black hole contains a singularity: a region where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity. Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density.

Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point. However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume.

Alternative forms of general relativity, including the addition of some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite.

Formation

Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or the collapse of hypothetical self-interacting dark matter.

Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements, until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.

Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of the nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~10⁵ M☉ could have formed in this way and then grown to ~10⁹ M☉. However, the very large amount of gas required for direct collapse is not typically stable against fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes which ultimately merge to create a quasar. A neutron star in a common envelope with a regular star can accrete sufficient material to collapse to a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare.

In the current epoch of the universe, conditions needed to form black holes are rare and are mostly only found in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed regions that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions would have become large enough to cause them to collapse into a black hole. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10⁻⁸ kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 10¹⁵ g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang.

In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from the collision of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.

Evolution

Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little for the pair to interact with gravitationally that could carry away enough orbital energy to let them draw closer together. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec.

When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.

Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
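The Eddington limit just described can be made quantitative with the standard textbook expression for the Eddington luminosity, L_Edd = 4πGMm_p c/σ_T, which balances radiation pressure on ionized hydrogen against gravity; this formula and the masses used below are assumptions for illustration, not quantities quoted in this article.

```python
# Minimal sketch: the Eddington luminosity, a standard estimate of the accretion
# luminosity at which outward radiation pressure on ionized hydrogen balances gravity.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_p = 1.673e-27        # proton mass, kg
sigma_T = 6.652e-29    # Thomson scattering cross-section, m^2
M_sun = 1.989e30       # kg
L_sun = 3.828e26       # solar luminosity, W

def eddington_luminosity(mass_kg):
    # L_Edd = 4*pi*G*M*m_p*c / sigma_T (textbook expression assumed here)
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

for m_solar in (10, 1e8):   # a stellar-mass and a supermassive example
    L = eddington_luminosity(m_solar * M_sun)
    print(f"M = {m_solar:g} M_sun: L_Edd ~ {L:.2e} W (~{L / L_sun:.2e} L_sun)")
```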
Accretion beyond the Eddington limit is called super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation.

The correlation between the masses of supermassive black holes in the centers of galaxies and the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress nearby gas, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas out of the galactic core, causing gas in galactic centers to be hotter than expected.

If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation of an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possible existence of low-mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10⁻⁷ of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any.

The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states that the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics.
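The temperature figures quoted above follow from the standard Hawking temperature formula T_H = ħc³/(8πGMk_B), which is assumed here rather than quoted in the article. A minimal Python sketch reproduces the 62 nK value for one solar mass and shows that even a full lunar mass falls below the 2.7 K microwave background, consistent with the threshold stated in the text.

```python
# Minimal sketch: the Hawking temperature T_H = hbar * c^3 / (8*pi*G*M*k_B),
# a standard formula assumed here (the article quotes only its consequences).
import math

hbar = 1.055e-34       # reduced Planck constant, J s
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
k_B = 1.381e-23        # Boltzmann constant, J/K
M_sun = 1.989e30       # kg
M_moon = 7.35e22       # approximate lunar mass, kg

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"1 solar mass: T_H ~ {hawking_temperature(M_sun) * 1e9:.0f} nK")
# A lunar-mass hole comes out below 2.7 K, so the evaporation threshold lies
# at a mass somewhat smaller than the Moon's, as the text states.
print(f"1 lunar mass: T_H ~ {hawking_temperature(M_moon):.1f} K")
```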
The two sets of laws are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy that scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict black holes having entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated.

Observational evidence

Millions of black holes of around 30 solar masses, formed by stellar collapse, are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed.

The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole shadow. The angular resolution of a telescope is set by its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture the diameter of the Earth. The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*.

Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam then travels a slightly different distance, the beams no longer cancel out, producing a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and must carefully control for terrestrial noise to be able to detect them. Since the first detection, made in 2015 and announced in 2016, multiple gravitational waves from black holes have been detected and analyzed.

The proper motions of stars near the center of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
In 1998, by fitting the motions of the stars to Keplerian orbits, the astronomers were able to infer that Sagittarius A* must be a 2.6×10⁶ M☉ object contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculation of the mass of Sagittarius A* to 4.3×10⁶ M☉, with a radius of less than 0.002 light-years. This upper-limit radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole.

X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity to study the central object and to determine whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman–Oppenheimer–Volkoff limit (TOV limit) dictates the largest mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star reported in 1986 led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes.

The center of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
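The stellar-orbit mass estimate for Sagittarius A* described above is, at heart, an application of Kepler's third law, M = 4π²a³/(GT²). The minimal Python sketch below applies it using approximate published values for the orbit of S2 (a period of roughly 16 years and a semi-major axis of roughly 1000 AU); these inputs are assumptions for illustration and are not values given in this article.

```python
# Minimal sketch: the mass enclosed by S2's orbit from Kepler's third law,
# M = 4*pi^2 * a^3 / (G * T^2).  Orbital inputs are approximate external values.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
AU = 1.496e11          # m
YEAR = 3.156e7         # s

def enclosed_mass(a_m, period_s):
    return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

M = enclosed_mass(1000 * AU, 16.0 * YEAR)   # ~1000 AU, ~16 yr: illustrative inputs
print(f"Enclosed mass ~ {M / M_sun:.1e} M_sun")   # comes out of order 4e6 M_sun
```

With these rough inputs the enclosed mass comes out at about 4×10⁶ solar masses, consistent with the refined value quoted above.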
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as atypical spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself.

Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉.

Alternatives

While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Objects that resist collapse at still higher masses, called electroweak stars, would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
While none of these hypothetical models can explain all of the observations of stellar black hole candidates, a Q star is the only alternative that could significantly exceed the mass limit for neutron stars and thus provide an alternative explanation for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically while functioning via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'.

Open questions

According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity.

Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift {\displaystyle z\geq 7}. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
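The difficulty with growing such early quasars can be quantified with a standard back-of-the-envelope argument based on the e-folding (Salpeter) time for Eddington-limited accretion; neither the formula nor the numerical inputs below appear in this article, and both are assumed here purely for illustration.

```python
# Minimal sketch: how long Eddington-limited growth takes.  The mass e-folding
# ("Salpeter") time t = eps/(1-eps) * sigma_T*c/(4*pi*G*m_p) is a standard
# back-of-the-envelope expression assumed here, with radiative efficiency eps.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
m_p = 1.673e-27        # proton mass, kg
sigma_T = 6.652e-29    # Thomson cross-section, m^2
YEAR = 3.156e7         # s

def efold_time(eps=0.1):
    """Mass e-folding time for Eddington-limited accretion with efficiency eps."""
    t_edd = sigma_T * c / (4 * math.pi * G * m_p)   # ~0.45 Gyr
    return (eps / (1 - eps)) * t_edd

def growth_time(m_seed, m_final, eps=0.1):
    return efold_time(eps) * math.log(m_final / m_seed)

t = growth_time(100, 1e9)   # illustrative: 100 M_sun seed -> 1e9 M_sun quasar engine
print(f"e-folding time ~ {efold_time() / YEAR / 1e6:.0f} Myr")
print(f"100 -> 1e9 M_sun takes ~ {t / YEAR / 1e9:.2f} Gyr at the Eddington limit")
```

With these assumptions the growth takes roughly 0.8 billion years, comparable to the entire age of the universe at redshift 7, which illustrates why seeds heavier than ordinary stellar remnants, or accretion beyond the Eddington limit, are invoked.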
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk suppressing the outward radiation pressure that would otherwise throttle accretion. However, the formation of bipolar jets prevents super-Eddington rates.

In fiction

Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes gained public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting close to a black hole with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer, due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mattel_Auto_Race] | [TOKENS: 1117]
Mattel Auto Race

Mattel Electronics Auto Race was released in 1976 by Mattel Electronics as the first handheld electronic game to use only solid-state electronics; it has no mechanical elements except the controls and on/off switch. Using hardware designed for calculators and powered by a nine-volt battery, the cars are represented by red LEDs on a playfield that covers only a small portion of the case. The audio consists of beeps. George J. Klose based the game on 1970s racing arcade video games and designed the hardware, with some hardware features added by Mark Lesser, who also wrote the 512 bytes of program code. From a top-down perspective, the player controls a car on a three-lane track and moves between lanes with a switch. Opponent vehicles move toward the player, in an effect similar to vertical scrolling, and the player must avoid them. A second control shifts gears from 1 to 4, with the speed increasing for each. Auto Race was followed by other successful handheld sports games from Mattel, including Football and Baseball, both programmed by Lesser. The Auto Race design was tweaked into multiple other handhelds, including Missile Attack (1976), which became Battlestar Galactica Space Alert (1978) as a tie-in with the Battlestar Galactica TV series, and Ski Slalom (1980). Auto Race was cloned in the Soviet Union as Elektronika IER-01.

Gameplay

The player's car is represented by a bright blip (a vertical dash sign) at the bottom of the screen. The player must make it to the top of the screen four times (four laps) to win, but, while moving towards the top, the player must swerve past other cars using the switch at the bottom of the system to toggle among three lanes. If hit by a car, the player's vehicle keeps moving back towards the bottom of the screen until it gets out of the other car's way. The goal is to finish in the shortest time possible before the allotted 99 seconds (the highest value the two-digit timer can show) run out. The player's car has four gears, and the higher the gear, the faster the other cars come at it. The manual assigns ratings to completion times.

Development

George J. Klose, a product development engineer at Mattel, came up with the concept of repurposing standard calculator hardware to create a handheld electronic game, using individual display segments as blips that would move on the display. He designed the gameplay for Mattel Auto Race, inspired by auto racing games found in video arcades in the 1970s. To get approval from Mattel for further development, he built a proof of concept with a blip moving on an LED display, without using a microprocessor. He then looked for a manufacturer to provide a circuit board that would fit into a compact package. Klose and his manager Richard Cheng approached the Microelectronics Division of Rockwell International, a leader in designing handheld calculator chips, to supply Mattel with the hardware and provide technical support. Mark Lesser, a circuit design engineer at Rockwell International, modified the B5000 calculator chip, adding a display driver multiplexing scheme to the hardware and a custom sound driver for a piezo-ceramic speaker, resulting in the B6000 chip used in Auto Race. Sound is produced by toggling the speaker in embedded timing loops from within the program itself. Without prior programming experience, Lesser wrote the game in assembly language for the 512 bytes of ROM. He spent eighteen months getting the code to fit.

Reception

Sales of Mattel Auto Race exceeded expectations.
Mattel in the 1970s, known mostly for Barbie dolls and Hot Wheels, was at first skeptical of products based on electronics, especially at what was considered an expensive retail price at the time: US$24.99 (equivalent to $140 in 2025). The success of Auto Race convinced Mattel to proceed with the development of Mattel Football, which frequently sold out and was in short supply, and this led to the creation of a new Mattel Electronics Division in 1978, which for a time was extremely profitable.

Legacy

Mattel pioneered the category of handheld electronic video games when it released Auto Race in 1976. It was the first in a line of sports handhelds including Football, Baseball, Basketball, Soccer, and Hockey, as well as non-sports games. Auto Race was reworked into Missile Attack, also released in 1976. NBC refused to air the Missile Attack commercial because of the dark theme of the game, and Mattel removed it from the market. It was reintroduced in 1978, rethemed around the Battlestar Galactica TV series, as Battlestar Galactica Space Alert. The player remains at the bottom of the playfield, and a fire button is used to shoot and destroy adversaries. If one reaches the center-bottom space on the playing field, the Galactica is considered destroyed and the game is over. The 1980 Flash Gordon handheld was the same game with a different science fiction license, but it was not released. In 1980, a reskinned Auto Race was released as Mattel Ski Slalom outside the US. The four gears are labeled SLALOM, BRONZE, SILVER, and GOLD. In 1983, a clone of Auto Race developed by the Ministry of Electronic Industry of the Soviet Union was released as Elektronika IER-01.
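As an illustration of the gameplay loop described in the Gameplay section, the following minimal Python sketch models the three-lane track, the gear-dependent pace, the collision knock-back, and the 99-second, four-lap structure; all pacing constants and probabilities are invented for illustration and are not taken from the original 512-byte program.

```python
# Minimal sketch of the Auto Race loop: three lanes, gears that set the pace,
# a 99-second timer, four laps to win, and a knock-back on collision.
import random

LANES, LAPS_TO_WIN, TIME_LIMIT = 3, 4, 99

def play(seconds=TIME_LIMIT, gear=4, seed=0):
    rng = random.Random(seed)
    progress, laps = 0, 0                    # progress 0..9 stands in for the LED column
    for t in range(seconds):
        lane = rng.randrange(LANES)          # stand-in for the player's lane switch
        oncoming = rng.randrange(LANES)      # an opposing blip scrolls down one lane
        if oncoming == lane and rng.random() < 0.2 * gear:
            progress = max(0, progress - 2)  # collision: pushed back toward the bottom
        else:
            progress += gear                 # higher gear climbs the screen faster
        if progress >= 10:                   # reached the top: one lap completed
            progress, laps = 0, laps + 1
            if laps == LAPS_TO_WIN:
                return t + 1                 # finishing time in seconds
    return None                              # ran out of time

print("Finished in", play(), "seconds")
```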
========================================