[SOURCE: https://en.wikipedia.org/wiki/BlackRock] | [TOKENS: 5660] |
BlackRock BlackRock, Inc. is an American multinational investment company. Founded in 1988, initially as an enterprise risk management and fixed income institutional asset manager, BlackRock is the world's largest asset manager, with $12.5 trillion in assets under management as of 2025. Headquartered in New York City, BlackRock has 70 offices in 30 countries and clients in 100 countries. BlackRock is the manager of the iShares group of exchange-traded funds, and along with Fidelity, Vanguard, and State Street, it is considered one of the Big Four index fund managers. Its Aladdin software keeps track of investment portfolios for many major financial institutions and its BlackRock Solutions division provides financial risk management services. As of 2025, BlackRock was ranked 210th on the Fortune 500 list of the largest U.S. corporations by revenue. BlackRock has sought to position itself as an industry leader in environmental, social, and governance (ESG) considerations in investments. The states of West Virginia, Florida, and Louisiana have divested money from or refuse to do business with the firm because of its ESG policies. BlackRock has been criticized for investing in companies involved in fossil fuels, the arms industry, the People's Liberation Army, and human rights violations in China. History BlackRock was founded in 1988 by Larry Fink, Robert S. Kapito, Susan Wagner, Barbara Novick, Ben Golub, Hugh Frater, Ralph Schlosstein, and Keith Anderson to provide institutional clients with asset management services from a risk-management perspective. Fink, Kapito, Golub, and Novick had worked together at First Boston, where Fink and his team were pioneers in the mortgage-backed securities market in the United States. During his tenure at First Boston, Fink had lost $90 million; that experience was the motivation to develop what he and the others considered excellent risk management and fiduciary practices. Fink sought initial operating capital from Peter Peterson of The Blackstone Group, who believed in Fink's vision of a firm devoted to risk management. Peterson called it Blackstone Financial Management. In exchange for a 50% stake in the bond business, Blackstone initially gave Fink and his team a $5 million credit line. Within months, the business had turned profitable, and by 1989 the group's assets had quadrupled to $2.7 billion. As equity was shared with Fink's staff, Blackstone's stake also fell to 40%. By 1992, Blackstone had a stake of about 36% in the company, and Stephen A. Schwarzman and Fink were considering selling shares to the public. The firm adopted the name BlackRock, and was managing $17 billion in assets by the end of the year. At the end of 1994, BlackRock was managing $53 billion. In 1994, Schwarzman and Fink had an internal dispute over methods of compensation and equity. Fink wanted to share equity with new hires, to lure talent from banks, but Schwarzman did not want to further lower Blackstone's stake. They agreed to part ways, and Schwarzman sold BlackRock, a decision he later called a "heroic mistake". In June 1994, Blackstone sold a mortgage-securities unit with $23 billion in assets to PNC Financial Services for $240 million. The unit had traded mortgages and other fixed-income assets, and during the sales process it changed its name from Blackstone Financial Management to BlackRock Financial Management. Schwarzman remained with Blackstone, while Fink became chairman and CEO of BlackRock. 
On October 1, 1999, BlackRock became a public company, selling shares at $14 each via an initial public offering on the New York Stock Exchange. By the end of 1999, BlackRock was managing $165 billion in assets. BlackRock grew both organically and by acquisition. In 2000, the firm launched BlackRock Solutions to provide risk management and investment analytics to institutional investors and other large investment managers. The platform combines advisory services with technology built on BlackRock's Aladdin system, an acronym for Asset, Liability and Debt and Derivative Investment Network. In August 2004, BlackRock made its first major acquisition, buying State Street Research & Management's holding company SSRM Holdings, Inc. from MetLife for $325 million in cash and $50 million in stock. The acquisition, completed in 2005, included the mutual-fund business State Street Research & Management and increased BlackRock's assets under management from $314 billion to $325 billion. BlackRock merged with Merrill Lynch's Investment Managers division (MLIM) in 2006, halving PNC's ownership and giving Merrill a 49.5% stake in the company. In October 2007, BlackRock acquired the fund-of-funds business of Quellos Capital Management. In April 2009, BlackRock acquired R3 Capital Management, LLC and management of its $1.5 billion fund. In May 2009, BlackRock Solutions was retained by the U.S. Treasury Department to analyze, unwind, and price the toxic assets owned by Bear Stearns, American International Group, Freddie Mac, Morgan Stanley, and other financial firms affected by the 2008 financial crisis. The Federal Reserve allowed BlackRock to superintend the $130 billion debt settlement of Bear Stearns and American International Group. In February 2010, to raise capital needed due to the 2008 financial crisis, Barclays sold its Barclays Global Investors (BGI) unit, which included its exchange-traded fund business, iShares, to BlackRock for US$13.5 billion, and Barclays acquired a near-20% stake in BlackRock. On April 1, 2011, BlackRock was added as a component of the S&P 500 stock market index. In 2013, Fortune listed BlackRock on its annual list of the world's 50 Most Admired Companies. In 2014, BlackRock's $4 trillion under management made it the "world's biggest asset manager". At the end of 2014, the Sovereign Wealth Fund Institute reported that 65% of BlackRock's assets under management were made up of institutional investors. By June 30, 2015, BlackRock had US$4.721 trillion of assets under management. On August 26, 2015, BlackRock entered into a definitive agreement to acquire FutureAdvisor, a digital wealth management provider with reported assets under management of $600 million. Under the deal, FutureAdvisor would operate as a business within BlackRock Solutions (BRS). BlackRock announced in November 2015 that it would wind down the BlackRock Global Ascent hedge fund after losses. The Global Ascent fund had been its only dedicated global macro fund, as BlackRock was "better known for its mutual funds and exchange traded funds." At the time, BlackRock managed $51 billion in hedge funds, with $20 billion of that in funds of hedge funds. In March 2017, after a six-month review led by Mark Wiseman, BlackRock initiated a restructuring of its $8 billion actively managed fund business, replacing certain funds with quantitative investment strategies; the restructuring resulted in the departure of seven portfolio managers and a $25 million charge in the second quarter. 
By April 2017, the iShares business accounted for $1.41 trillion, or 26%, of BlackRock's total assets under management, and 37% of BlackRock's base fee income. Also in April 2017, BlackRock backed the inclusion of mainland Chinese shares in MSCI's global index for the first time. In January 2020, PNC Financial Services sold its stake in BlackRock for $14.4 billion. In March 2020, the Federal Reserve chose BlackRock to manage two corporate bond-buying programs in response to the COVID-19 pandemic: the $500 billion Primary Market Corporate Credit Facility (PMCCF) and the Secondary Market Corporate Credit Facility (SMCCF). The mandate also covered purchases by the Federal Reserve of commercial mortgage-backed securities (CMBS) guaranteed by Government National Mortgage Association, Fannie Mae, or Freddie Mac. In August 2020, BlackRock received approval from the China Securities Regulatory Commission to set up a mutual fund business in the country, making it the first global asset manager to get consent from the Chinese government to start operations there. In October 2021, BlackRock launched its Voting Choice program, enabling institutional clients invested in index funds to participate in shareholder voting. Eligible clients can vote on all issues, vote only on some issues, select from 14 different voting policies, or allow BlackRock's investment stewardship team to vote for them. BlackRock Investment Stewardship is a team of approximately 70 analysts who engage with the boards and management teams of companies, and vote shares on behalf of non-voting clients. In November 2021, BlackRock lowered its investment in India while increasing investment in China. The firm maintains a dedicated India Fund, through which it invests in Indian start-ups Byju's, Paytm, and Pine Labs. On December 28, 2022, it was announced that BlackRock and Volodymyr Zelensky had coordinated a role for the company in the reconstruction of Ukraine, after BlackRock CEO Larry Fink and Zelensky met over a video conference in September 2022. In July 2025, Bloomberg reported that BlackRock had paused efforts earlier in the year to line up investors for a proposed Ukraine recovery fund amid increased uncertainty; the company said it had completed its pro bono advisory work in 2024 and no longer had an active mandate. In April 2023, the company was hired to sell $114 billion in assets of Signature Bank and Silicon Valley Bank after the 2023 United States banking crisis. In June 2023, BlackRock filed an application with the United States Securities and Exchange Commission (SEC) to launch a spot bitcoin exchange-traded fund (ETF), and in November 2023 it filed another application for a spot Ethereum ETF. The spot bitcoin ETF filing and 10 others were approved on January 10, 2024. On January 19, 2024, the iShares Bitcoin Trust ETF (IBIT) became the first spot bitcoin ETF to reach $1 billion in volume. In July 2023, the company appointed Amin H. Nasser to its board. Nasser, the chief executive officer of Saudi Aramco, the world's largest oil company, would fill the board vacancy left by Bader Alsaad in 2024. Also in July 2023, Jio Financial Services (JFS) entered the asset management business through a 50:50 joint venture with BlackRock, named Jio BlackRock. In August 2023, BlackRock signed an agreement with New Zealand to establish a NZ$2 billion investment fund for solar, wind, green hydrogen, battery storage, and EV charging projects as part of the country's goal of reaching 100% renewable energy by 2030. 
In January 2024, BlackRock announced that it would acquire the investment fund Global Infrastructure Partners for $12.5 billion, agreeing to pay $3 billion in cash and 12 million of its own shares as part of the deal. In March 2024, BlackRock launched its first tokenized fund, the BlackRock USD Institutional Digital Liquidity Fund (BUIDL), on Ethereum; the fund represents investments in U.S. Treasury bills and repo agreements and secured $245 million in assets in its first week. On July 15, 2024, BlackRock removed from circulation an advertisement filmed in 2022 that briefly featured Thomas Matthew Crooks, the gunman in the attempted assassination of Donald Trump. The firm expanded its headquarters at 50 Hudson Yards in mid-2024. In March 2025, BlackRock launched its first European bitcoin exchange-traded product, the iShares Bitcoin ETP, domiciled in Switzerland and listed in Paris, Amsterdam, and Frankfurt. In the United States, BlackRock's iShares Bitcoin Trust (IBIT) launched in January 2024 and, by May 2024, was reported to be the world's largest bitcoin fund, with nearly $20 billion in assets. In April 2025, BlackRock filed with the SEC to launch a new share class of its $150 billion money market fund that is registered on a blockchain. In June 2025, BlackRock launched the iShares Texas Equity ETF (TEXN), a Texas-focused fund the firm said would invest in companies headquartered in the state. Finances In 2020, the non-profit American Economic Liberties Project issued a report highlighting the fact that "the 'Big Three' asset management firms – BlackRock, Vanguard and State Street – manage over $15 trillion in combined global assets under management, an amount equivalent to more than three-quarters of U.S. gross domestic product." The report called for structural reforms and better regulation of the financial markets. In 2021, BlackRock managed over $10 trillion in assets under management, about 40% of the GDP of the United States (nominal $25.347 trillion in 2022). BlackRock reported $12.53 trillion in assets under management as of June 30, 2025. Ownership In March 2018, PNC was BlackRock's largest shareholder with 25% of all shares, but announced in May 2020 that it intended to sell all its shares, worth $17 billion. Over 80% of BlackRock is owned by institutional investors. Controversies Due to its power and the sheer size and scope of its financial assets and activities, BlackRock has been called the world's largest shadow bank by The Economist and Basler Zeitung. In 2020, U.S. Representatives Katie Porter and Chuy García proposed a U.S. House bill aiming to restrain BlackRock and other shadow banks. On March 4, 2021, U.S. Senator Elizabeth Warren suggested that BlackRock should be designated "too big to fail" and regulated accordingly. BlackRock invests the funds of its clients (for example, the owners of iShares exchange-traded fund units) in numerous publicly traded companies, some of which compete with each other. Because of the size of BlackRock's funds, the company is among the top shareholders of many companies. BlackRock states that these shares are ultimately owned by the company's clients, not by BlackRock itself—a view shared by multiple independent academics—but acknowledges that it can exercise shareholder votes on behalf of these clients, in many cases without client input. 
In the 2020s, the company took steps to allow institutional investors to participate in shareholder voting in nearly half of BlackRock's equity index assets. BlackRock has been the subject of conspiracy theories, including one claiming that BlackRock owns both Fox News and Dominion Voting Systems. While it is true that BlackRock owns 15% of Fox Corporation, Snopes described the whole claim as "false" and PolitiFact described it as "mostly false" because BlackRock has no ownership stake in Dominion Voting Systems. Some BlackRock conspiracy theories have incorporated antisemitism, such as the conspiracy theory that Jewish people, including BlackRock founder Robert Kapito, are part of a cabal responsible for COVID-19 and a "COVID agenda". "Common ownership" arises when competing firms within a market share the same large shareholders, as can happen when the competitors' shares trade publicly. Common ownership has the potential to cause harm because it reduces market competition, which can be bad for consumers and the economy. It can also lead to higher pay for CEOs, even as competition between the firms is reduced. In response to a widely cited research paper that showed that common ownership raises prices, BlackRock issued a white paper calling the mechanism "vague and implausible". At the same time, BlackRock's 2023 SEC annual report identified common ownership as a potential business risk, noting that "in 2023, the FTC and DOJ released new merger guidelines recognizing that common ownership may reduce competitive incentives" and that "policy solutions could, in turn, adversely affect BlackRock." The company disclosed that common ownership "may be given greater consideration in regulatory investigations, studies, rule proposals, policy decisions and/or the scrutiny of mergers and acquisitions." In 2017, BlackRock expanded its environmental, social and corporate governance (ESG) projects with new staff and products. BlackRock started drawing attention to environmental and diversity issues by means of official letters to CEOs and shareholder votes together with activist investors or investor networks such as the Carbon Disclosure Project, which in 2017 backed a shareholder resolution for ExxonMobil to act on climate change. In 2018, it asked Russell 1000 companies to improve gender diversity on their boards of directors if those boards had fewer than two women. In August 2021, a former BlackRock executive, who had served as the company's first global chief investment officer for sustainable investing, said he thought the firm's ESG investing was a "dangerous placebo that harms the public interest." The former executive said that financial institutions are motivated to engage in ESG investing because ESG products have higher fees, which in turn increase company profits. In October 2021, The Wall Street Journal editorial board wrote that BlackRock was pushing the U.S. Securities and Exchange Commission to adopt rules requiring private companies to publicly disclose their climate impact, the diversity of their boards of directors, and other metrics. The editorial board opined that "ESG mandates, which also carry substantial litigation and reputation risks, will cause many companies to shun public markets. This would hurt stock exchanges and asset managers, but most of all retail investors." 
In January 2022, BlackRock founder and CEO Larry Fink defended the company's focus on ESG investing, pushing back "against accusations the asset manager was using its heft and influence to support a politically correct or progressive agenda." Fink said the practice of ESG "is not woke". BlackRock's emphasis on ESG has drawn criticism as "either bowing to anti-business interests" or being "merely marketing". In a talk at the Aspen Ideas Festival in June 2023, Fink said he had stopped using the term "ESG" because it had been "weaponized". According to an Axios reporter, Fink said, "I'm ashamed of being part of this conversation." Later, according to Axios, Fink said, "I never said I was ashamed. I'm not ashamed. I do believe in conscientious capitalism." In July 2023, BlackRock announced that it would allow retail investors a proxy vote in its biggest ETF from 2024. The move was initiated in the context of claims from US Republicans that BlackRock is systematically trying to push a "woke agenda" through its pro-ESG activities. Under the plan, investors in BlackRock's iShares Core S&P 500 ETF will be asked to choose from seven general voting policies, ranging from voting generally with BlackRock's management to policies emphasizing environmental, social and governance factors or prioritizing Catholic values. Investors will not be able to vote on specific companies. The Editorial Board at The Wall Street Journal argued that this amounted to a "false voting choice", since almost all of the pre-selected voting policies are devised by the ESG-aligned proxy advisory firms Glass Lewis and Institutional Shareholder Services. As of December 2018, BlackRock was the world's largest investor in coal-fired power stations, holding shares worth $11 billion in 56 companies in the industry. BlackRock owned more oil, gas, and thermal coal reserves than any other investment management company, with total reserves equivalent to 9.5 gigatonnes of CO2 emissions, or 30% of total energy-related emissions as of 2017. Environmental groups including the Sierra Club, as well as Amazon Watch, launched a campaign in September 2018 called "BlackRock's Big Problem", claiming that BlackRock is the "biggest driver of climate destruction on the planet", in part due to its opposition to fossil fuel divestment. In 2019, climate activists carried out street theatre and glued themselves to the door of the company's London offices. On January 10, 2020, a group of climate activists rushed inside the Paris offices of BlackRock France, painting walls and floors with warnings and accusations about the company's responsibility for the effects of global warming. In May 2019, BlackRock was criticized for the environmental impact of its holdings, as it was a major shareholder in every oil supermajor except Total S.A. and in 7 of the 10 biggest coal producers. On January 14, 2020, the company shifted its investment policy; BlackRock CEO Larry Fink said that environmental sustainability would be a key goal for investment decisions. BlackRock announced that it would sell $500 million worth of coal-related assets and create funds that would not invest in companies profiting from fossil fuels. Nonetheless, BlackRock's support for shareholder resolutions requesting climate risk disclosure fell from 25% in 2019 to 14% in 2020. BlackRock has also been criticized over inaction on climate change and over deforestation in the Amazon rainforest. 
According to The New Republic, BlackRock "has positioned itself as the good guy on Wall Street, and its executives as a crew of mild-mannered money managers who understand the risks of the climate crisis and the importance of diversity. But those commitments, critics say, only extend so far into the firm's day-to-day operations." According to an IESE study, BlackRock has indeed influenced polluting companies to lower their carbon emissions: companies that met with BlackRock's CEO Larry Fink had lower CO2 emissions the following year. Another study found that BlackRock had mislabelled some investment funds: of 82 funds labelled as sustainable, only 9% avoided investing in fossil fuel companies. In May 2018, anti-gun protesters held a demonstration outside the company's annual general meeting in Manhattan. After discussions with firearms manufacturers and distributors, on April 5, 2018, BlackRock introduced two new exchange-traded funds (ETFs) that exclude stocks of gun makers and large gun retailers such as Walmart, Dick's Sporting Goods, Kroger, Sturm Ruger, American Outdoor Brands, and Vista Outdoor, and removed those stocks from seven existing ESG funds. The European Ombudsman opened an inquiry in May 2020 to inspect the European Commission's file on its decision to award BlackRock a contract to carry out a study on integrating environmental, social and governance risks and objectives into EU banking rules ("the prudential framework"). European Parliament members questioned the impartiality of BlackRock given its investments in the sector. Riley Moore, the state treasurer of West Virginia, said in June 2022 that BlackRock and five other financial institutions would no longer be allowed to do business with the state of West Virginia because of the company's advocacy against the fossil fuel industry. In October 2022, Louisiana removed $794 million from BlackRock due to the company's support of ESG and green energy. In December 2022, Jimmy Patronis, the chief financial officer of Florida, announced that the government of Florida would be divesting $2 billion worth of investments under management by BlackRock due to the firm's move to strengthen ESG standards and policies. BlackRock responded with a statement saying that the divestment would place politics over investor interests. In August 2021, BlackRock set up its first mutual fund in China after raising over one billion dollars from 111,000 Chinese investors, becoming the first foreign-owned company allowed by the Chinese government to operate a wholly owned business in China's mutual fund industry. Writing in The Wall Street Journal, George Soros described BlackRock's initiative in China as a "tragic mistake" that would "damage the national security interests of the U.S. and other democracies." In October 2021, the conservative non-profit group Consumers' Research launched an ad campaign criticizing BlackRock's relationship with the Chinese government in response to BlackRock's climate change activism, as part of Consumers' Research's campaign against perceived "wokeness" in American corporations. In December 2021, it was reported that BlackRock was an investor in two companies that had been blacklisted by the US government over China's human rights abuses against the Uyghurs in Xinjiang. In one case (Hikvision), BlackRock increased its level of investment after the company's blacklisting. 
In August 2023, the US House of Representatives' Select Committee on the Chinese Communist Party initiated an investigation into the firm's investments in Chinese companies accused of violating human rights and aiding the People's Liberation Army. In April 2024, a committee report said index inclusion and fund investments had directed at least $1.9 billion into entities "blacklisted" by U.S. authorities. The committee noted the activity was legal under then-current rules and urged policy changes. BlackRock said its products comply with U.S. law and client mandates. In February 2025, a group of 17 U.S. state attorneys general criticized BlackRock for making improper or inadequate disclosures about investments in China. BlackRock was scrutinized for allegedly taking advantage of its close ties with the Federal Reserve during the COVID-19 pandemic response efforts. In June 2020, The New Republic wrote that BlackRock "was having a very good pandemic" and was casting "itself as socially responsible while contributing to the climate catastrophe, evading regulatory scrutiny, and angling to influence [a potential] Biden administration." The Financial Times described BlackRock as having secured a prominent advisory role in the Fed's post-COVID asset purchase program, prompting concerns over whether BlackRock would use its influence to encourage the Fed to purchase BlackRock products; during the Fed's 2020 quantitative easing program, BlackRock's corporate bond ETF received $4.3 billion in new investment, compared to the respective $33 million and $15 million received by BlackRock's competitors Vanguard Group and State Street. BlackRock pledged to forgo profits from the Fed's program other than its financial markets advisory fees, and pledged that additional income earned on its bond ETFs as a result of the buying program would be returned to the Federal Reserve. On May 26, 2020, the contract was released publicly. The New York Times wrote about the contract that BlackRock "will earn no more than $7.75 million per year for the main bond portfolio it will manage. It will also be prohibited from earning fees on the sale of bond-backed exchange traded funds, a segment of the market it dominates." In January 2026, U.S. Treasury Secretary Scott Bessent confirmed that BlackRock's CIO of Global Fixed Income, Rick Rieder, is one of four short-listed candidates under consideration by President Trump to succeed Federal Reserve Chair Jerome Powell when his term ends in May 2026. Key people As of 2025, BlackRock has an 18-member board of directors. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dose_rate] | [TOKENS: 241] |
Dose rate A dose rate is the quantity of radiation absorbed or delivered per unit time. It is often expressed in micrograys per hour (μGy/h), or as an equivalent dose rate ḢT in rems per hour (rem/h) or sieverts per hour (Sv/h). Dose and dose rate measure different quantities in the same way that distance and speed do. When considering stochastic radiation effects, only the total dose is relevant: each incremental unit of dose increases the probability that the stochastic effect happens. When considering deterministic effects, the dose rate also matters. The total dose can be above the threshold for a deterministic effect, but if the dose is spread out over a long period of time, the effect is not observed. Consider sunburn, a deterministic effect: when skin is exposed to bright sunlight for only ten minutes at a high UV index, that is to say a high average dose rate, it can turn red and painful. The same total amount of energy from indirect sunlight spread out over several years, a low average dose rate, would not cause a sunburn at all, although it may still cause skin cancer. |
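The distinction can be made concrete with simple arithmetic: for a constant dose rate, total dose is dose rate multiplied by exposure time. A minimal sketch in Python, with illustrative numbers that are not taken from the article:

```python
# Total dose is the time integral of dose rate; for a constant rate,
# dose = rate * time. All values below are illustrative only.
def total_dose_ugy(dose_rate_ugy_per_h: float, hours: float) -> float:
    """Total absorbed dose in micrograys for a constant dose rate."""
    return dose_rate_ugy_per_h * hours

acute = total_dose_ugy(600.0, 1.0)     # 600 uGy/h sustained for one hour
chronic = total_dose_ugy(0.5, 1200.0)  # 0.5 uGy/h sustained for 1200 hours

# Both exposures deliver the same total dose, so they carry the same
# stochastic risk; only the acute exposure has the high dose *rate*
# that could cross a deterministic-effect threshold.
assert acute == chronic == 600.0
```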
======================================== |
[SOURCE: https://techcrunch.com/author/isabelle-johannessen/] | [TOKENS: 349] |
Isabelle Johannessen Head of the Startup Battlefield Program, TechCrunch Isabelle leads Startup Battlefield, TechCrunch’s iconic launchpad and competition for the world’s most promising early-stage startups. You can contact or verify outreach from Isabelle by emailing isabelle.johannessen@techcrunch.com. She scouts top founders across 99+ countries and prepares them to pitch on the Disrupt stage in front of tier-one investors and global media. Before TechCrunch, she designed and led international startup acceleration programs across Japan, Korea, Italy, and Spain—connecting global founders with VCs and helping them successfully enter the U.S. market. With a Master’s in Entrepreneurship & Disruptive Innovation—and a past life as a professional singer—she brings a blend of strategic rigor and stage presence to help founders craft compelling stories and stand out in crowded markets. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ishango_bone] | [TOKENS: 1797] |
Ishango bone The Ishango bone, discovered at the "Fisherman Settlement" of Ishango in the Democratic Republic of the Congo, is a bone tool and possible mathematical device that dates to the Upper Paleolithic era. The curved bone is dark brown in color, about 10 centimeters in length, and features a sharp piece of quartz affixed to one end, perhaps for engraving. Because the bone has been narrowed, scraped, polished, and engraved to a certain extent, it is no longer possible to determine what animal the bone belonged to, although it is assumed to have been a mammal. The ordered engravings have led many to speculate about the meaning behind these marks, including interpretations of mathematical significance or astrological relevance. It is thought by some to be a tally stick, as it features a series of what have been interpreted as tally marks carved in three columns running the length of the tool, although it has also been suggested that the scratches might have been made to create a better grip on the handle or for some other non-mathematical reason. Others argue that the marks on the object are non-random and that it was likely a kind of counting tool used to perform simple mathematical procedures. Other speculations include the engravings on the bone serving as a lunar calendar. Dating to 20,000 years before present, it has been described as "the oldest mathematical tool of humankind", although older engraved bones are also known, such as the approximately 26,000-year-old "Wolf Bone" from Dolní Věstonice in the Czech Republic and the approximately 42,000-year-old Lebombo bone from southern Africa. History The Ishango bone was found in 1950 by the Belgian Jean de Heinzelin de Braucourt while exploring what was then the Belgian Congo. It was discovered in the area of Ishango near the Semliki River, into which Lake Edward empties; the Semliki forms part of the headwaters of the Nile River, and the area now lies on the border between modern-day Uganda and the D.R. Congo. Some archaeologists believe the prior inhabitants of Ishango were a "pre-sapiens species". However, the most recent inhabitants, who gave the area its name, have no immediate connections with the primary settlement, which was "buried in a volcanic eruption". On an excavation, de Heinzelin discovered a bone about the "size of a pencil" amongst human remains and many stone tools in a small community that fished and gathered in this area of Africa. Professor de Heinzelin brought the Ishango bone to Belgium, where it was stored in the treasure room of the Royal Belgian Institute of Natural Sciences in Brussels. Several molds and copies were created from the petrified bone in order to preserve the delicate nature of the fragile artifact while it was exported. A written request to the museum was required to see the artifact, as it was no longer on public display. The artifact was first estimated to have originated between 9,000 BCE and 6,500 BCE, implying an age of between 8,500 and 11,000 years, but numerous other analyses suggested the bone could be as old as 44,000 years. The dating of the site where it was discovered was re-evaluated, however, and the bone is now believed to be about 20,000 years old (dating from between 18,000 BCE and 20,000 BCE). The dating of this bone is widely debated in the archaeological community, as the ratio of carbon isotopes was upset by nearby volcanic activity. 
Interpretations The 168 etchings on the bone are ordered in three parallel columns along the length of the bone, with individual markings varying in orientation and length. The first column, the central column along the most curved side of the bone, is referred to as the M column, from the French word milieu (middle). The left and right columns are respectively referred to as G and D, for gauche (left) and droite (right) in French. The parallel markings have led to various tantalizing hypotheses, such as that the implement indicates an understanding of decimals or prime numbers. Though these propositions have been questioned, it is considered likely by many scholars that the tool was used for mathematical purposes, perhaps including simple mathematical procedures or the construction of a numeral system. The discoverer of the Ishango bone, de Heinzelin, suggested that the bone was evidence of knowledge of simple arithmetic, or at least that the markings were "deliberately planned". He based his interpretation on archaeological evidence, comparing "Ishango harpoon heads to those found in northern Sudan and ancient Egypt". This comparison led to the suggestion of a link between arithmetic processes conducted at Ishango and the "commencement of mathematics in ancient Egypt." The third column has been interpreted as a "table of prime numbers", as column G appears to illustrate the prime numbers between 10 and 20, but this might be a coincidence. Historian of mathematics Peter S. Rudman argues that prime numbers were probably not understood until the early Greek period of about 500 BCE, since they depend on the concept of division, which he dates to no earlier than 10,000 BCE. More recently, mathematicians Dirk Huylebrouck and Vladimir Pletser have proposed that the Ishango bone is a counting tool using base 12 with sub-bases 3 and 4, and involving simple multiplication, somewhat comparable to a primitive slide rule. They have concluded, however, that the evidence is insufficient to confirm an understanding of prime numbers during this time period. Anthropologist Caleb Everett has also provided insight into interpretations of the bone, explaining that "the quantities evident in the groupings of marks are not random", and are likely evidence of prehistoric numerals. Everett suggests that the first column may reflect some "doubling pattern" and that the tool might have been used for counting and multiplication, and possibly as a "numeric reference table". George Gheverghese Joseph wrote: "A single bone with suggestive markings raises interesting possibilities of a highly developed sense of arithmetical awareness; it does not provide conclusive evidence." He suggested that it might "represent an early calendar of events of a ceremonial or ritual nature superimposed on a record of a lunar/menstrual cycle constructed by a woman" or be a precursor of writing. Caution in interpretation is warranted by the existence of a second bone found in the same archaeological layer, one whose marks are not mathematically suggestive. Alexander Marshack, an archaeologist from Harvard University, speculated after a "detailed microscopic examination" of the bone that it represents numeric notation of a six-month lunar calendar. This idea arose from the fact that the markings in two of the columns each add up to 60, corresponding to two lunar months, while the carvings in the remaining column add up to 48, or about a month and a half. 
Marshack generated a diagram comparing the different sizes and phases of the Moon with the notches of the Ishango bone. There is some circumstantial evidence to support this alternative hypothesis, namely that present-day African societies use bones, strings, and other devices as calendars. Critics in the field of archaeology have concluded, however, that Marshack's interpretation is flawed, arguing that his analysis of the Ishango bone confines itself to a simple search for a pattern rather than an actual test of his hypothesis. The lunar-calendar reading has also led Claudia Zaslavsky to suggest that the creator of the tool could have been a woman, tracking the lunar phase in relation to the menstrual cycle. Mathematician Olivier Keller warns against the urge to project modern culture's perception of numbers onto the Ishango bone. Keller explains that this practice encourages observers to negate and possibly ignore alternative symbolic materials, present in a range of media (on human remains, stones and cave art) from the Upper Paleolithic era and beyond, which also deserve equitable investigation. Dirk Huylebrouck, in a review of the research on the object, favors the idea that the Ishango bone had some advanced mathematical use: "Whatever the interpretation, the patterns surely show the bone was more than a simple tally stick." He also remarks that "to credit the computational and astronomical reading simultaneously would be far-fetched", quoting mathematician George Joseph, who stated: "A single bone may well collapse under the heavy weight of conjectures piled onto it." Similarly, in "The Crest of the Peacock: Non-European Roots of Mathematics", Joseph asserted that the Ishango bone was "more than a simple tally", and stated: "Certain underlying numerical patterns may be observed within each of the rows marked." Even so, regarding various speculative theories of its exact mathematical use, Joseph concluded that several are plausible but uncertain. |
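The arithmetic behind the interpretations above is easy to check. The following Python sketch assumes the per-column groupings most commonly reported in the literature; the individual counts are not given in the text above and should be treated as an assumption:

```python
# Commonly reported notch groupings on the Ishango bone (an assumption;
# the per-column counts are not stated in the article text above).
columns = {
    "G (left)":   [11, 13, 17, 19],           # the primes between 10 and 20
    "M (middle)": [3, 6, 4, 8, 10, 5, 5, 7],  # contains doubling pairs such as (3, 6) and (4, 8)
    "D (right)":  [11, 21, 19, 9],            # 10 +/- 1 and 20 +/- 1
}

for name, marks in columns.items():
    print(f"{name}: {marks} -> sum {sum(marks)}")

# Two columns each sum to 60 (about two lunar months) and the third to 48
# (about a month and a half); the grand total is the bone's 168 etchings.
assert sum(sum(m) for m in columns.values()) == 168
```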
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cecil_(programming_language)] | [TOKENS: 274] |
Cecil (programming language) Cecil is a pure object-oriented programming language that was developed by Craig Chambers at the University of Washington in 1992 as part of the Vortex project there. Cecil has many similarities to other object-oriented languages, most notably Objective-C, Modula-3, and Self. The main goals of the project were extensibility, orthogonality, efficiency, and ease of use. The language supports multiple dispatch and multimethods, dynamic inheritance, and optional static type checking. Unlike most other OOP systems, Cecil allows subtyping and code inheritance to be used separately, allowing run-time or external extension of object classes or instances. Like Objective-C, all object services in Cecil are invoked by message passing, and the language supports run-time class identification. These features allow Cecil to support dynamic, exploratory programming styles. Parameterized types and methods (generics, polymorphism), garbage collection, and delegation are also supported. Cecil also supports a module mechanism for isolating independent libraries or packages. Cecil does not presently support threads or any other form of concurrency. A standard library for Cecil is also available and includes various collection, utility, system, I/O, and GUI classes. The Diesel language was the successor of Cecil. |
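Since multiple dispatch is the feature Cecil is best known for, a short illustration may help: the method implementation is selected by the runtime types of all arguments, not just the receiver. The sketch below is illustrative Python with a hand-rolled registry, not Cecil syntax:

```python
# A minimal sketch of multiple dispatch: implementations are registered
# per tuple of argument types and looked up at call time.
_registry = {}

def multimethod(*types):
    """Register fn as the implementation for this exact tuple of types."""
    def register(fn):
        _registry[types] = fn
        return dispatch          # every decorated name becomes the dispatcher
    return register

def dispatch(*args):
    fn = _registry.get(tuple(type(a) for a in args))
    if fn is None:
        raise TypeError("no applicable method for these argument types")
    return fn(*args)

class Asteroid: pass
class Ship: pass

@multimethod(Ship, Asteroid)
def collide(a, b):
    return "ship hits asteroid"

@multimethod(Asteroid, Asteroid)
def collide(a, b):
    return "asteroids collide"

print(collide(Ship(), Asteroid()))      # -> ship hits asteroid
print(collide(Asteroid(), Asteroid()))  # -> asteroids collide
```

A real multimethod system, Cecil's included, also ranks applicable methods by argument inheritance; the exact-match lookup here is the simplest possible version.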
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pigeon_post] | [TOKENS: 2784] |
Pigeon post Pigeon post is the use of homing pigeons to carry messages. Pigeons are effective as messengers due to their natural homing abilities. The pigeons are transported to a destination in cages, where messages are attached to them; each pigeon then naturally flies back to its home, where the recipient can read the message. They have been used in many places around the world. Pigeons have also been used to great effect in military situations, and are in this case referred to as war pigeons. Early history As a method of communication, pigeon post is likely as old as the ancient Persians, from whom the art of training the birds probably came. The Romans used pigeon messengers to aid their military over 2000 years ago. Frontinus wrote that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks conveyed the names of the victors at the Olympic Games to their various cities by this means. Naval chaplain Henry Teonge (c. 1620–1690) describes in his diary a regular pigeon postal service used by merchants between İskenderun and Aleppo in the Levant. The Mughals also used messenger pigeons. Before the telegraph, this method of communication was used extensively in the financial field. The Dutch government established a civil and military system in Java and Sumatra early in the 19th century, the birds being obtained from Baghdad. In 1851, the German-born Paul Julius Reuter opened an office in the City of London which transmitted stock market quotations between London and Paris via the new Calais to Dover cable. Reuter had previously used pigeons to fly stock prices between Aachen and Brussels, a service that operated for a year until a gap in the telegraph link was closed. Details of the employment of pigeons during the siege of Paris in 1870–71 led to a revival in the training of pigeons for military purposes. Numerous societies were established for keeping pigeons of this class in all important European countries, and, in time, various governments established systems of communication for military purposes by pigeon post. After the pigeon post between military fortresses had been thoroughly tested, attention was turned to its use for naval purposes, to send messages to ships in nearby waters. It was also used by news agencies and private individuals at various times. Governments in several countries established lofts of their own. Laws were passed making the destruction of such pigeons a serious offense; premiums to stimulate efficiency were offered to private societies, and rewards given for the destruction of birds of prey. Before the advent of radio, pigeons were used by newspapers to report yacht races, and some yachts were actually fitted with lofts. During the establishment of formal pigeon post services, the registration of all birds was introduced. At the same time, in order to hinder the efficiency of the systems of foreign countries, difficulties were placed in the way of the importation of their birds for training, and in a few cases falcons were specially trained to interrupt the service during war, the Germans having set the example by employing hawks against the Paris pigeons in 1870–71. No satisfactory method of protecting the weaker birds seems to have been developed, though the Chinese formerly provided their pigeons with whistles and bells to scare away birds of prey. As radio telegraphy and telephony were developed, the use of pigeons became limited to fortress warfare by the 1910s. 
Although the British Admiralty had attained a very high standard of efficiency, it discontinued its pigeon service in the early 20th century. In contrast, large numbers of birds were still kept by France, Germany and Russia at the outbreak of the First World War. In modern times, some rafting photographers still use pigeons as a sneakernet to transport digital photos on flash media from the camera to the tour operator. Paris The pigeon post that was in operation while Paris was besieged during the Franco-Prussian War of 1870–1871 is probably the most famous. Barely six weeks after the outbreak of hostilities, the Emperor Napoleon III and the French Army of Châlons surrendered at Sedan on 2 September 1870. There were two immediate consequences: the fall of the Second Empire and the swift Prussian advance on Paris. As had been expected, the normal channels of communication into and out of Paris were interrupted during the four-and-a-half months of the siege, and, indeed, it was not until the middle of February 1871 that the Prussians relaxed their control of the postal and telegraph services. With the encirclement of the city on 18 September, the last overhead telegraph wires were cut on the morning of 19 September, and the secret telegraph cable in the bed of the Seine was located and cut on 27 September. Although a number of postmen succeeded in passing through the Prussian lines in the earliest days of the siege, others were captured and shot, and there is no proof of any post, certainly after October, reaching Paris from the outside, apart from private letters carried by unofficial individuals. For assured communication into Paris, the only successful method was the time-honoured carrier pigeon, and thousands of messages, official and private, were thus taken into the besieged city. During the course of the siege, pigeons were regularly taken out of Paris by balloon. Initially, one of the pigeons carried by a balloon was released as soon as the balloon landed, so that Paris could be apprised of its safe passage over the Prussian lines. Soon a regular service was in operation, based first at Tours and later at Poitiers. The pigeons were taken to their base after their arrival from Paris and, when they had preened themselves, been fed and rested, they were ready for the return journey. Tours lies some 200 km (about 125 miles) from Paris and Poitiers some 300 km (about 185 miles); to reduce the flight distance the pigeons were taken by train as far forward towards Paris as was safe from Prussian intervention. Before release, they were loaded with their despatches. The first despatch was dated 27 September and reached Paris on 1 October, but it was only from 16 October, when an official control was introduced, that a complete record was kept. The pigeons carried two kinds of despatch, official and private, both of which are described in detail below. The service was put into operation for the transmission of information from the Delegation to Paris and was opened to the public in early November. The private despatches were sent only when an official despatch was being sent, since the latter had absolute priority. However, the introduction of the Dagron microfilms eased any problems there might have been in claims for transport, since their volumetric requirements were very small. For example, one tube sent during January contained 21 microfilms, of which 6 were official despatches and 15 were private, while a later tube contained 16 private despatches and 2 official ones. 
In order to improve the chances of the despatches successfully reaching Paris, the same despatch was sent by several pigeons: one official despatch was repeated 35 times, and the later private despatches were repeated on average 22 times. The records show that from 7 January to the end, 61 tubes were sent off, containing 246 official and 671 private despatches. The practice was to send off the despatches not only by pigeons of the same release but also by those of successive releases, until Paris signaled the arrival of those despatches. When a pigeon reached its particular loft in Paris, its arrival was announced by a bell in the trap in the loft. Immediately, a watchman relieved it of its tube, which was taken to the Central Telegraph Office, where the content was carefully unpacked and placed between two thin sheets of glass. The photographs are said to have been projected by magic lantern onto a screen, where the enlargement could be easily read and written down by a team of clerks. This would certainly be true for the microfilms, but the earlier despatches on photographic paper were read through microscopes. The transcribed messages were written out on forms (telegraph forms for private messages, with or without the special annotation "pigeon") and so delivered. The interval between sending a private message and its receipt by the addressee depended on many factors: the density of telegraphic traffic to and from the sender's town, the time taken to register the message, to pass it to the printers where it was assembled with its 3000 companions into a single page, and then to assemble the pages into nines or twelves or sixteens. During the four months of the siege, 150,000 official and 1 million private communications were carried into Paris by this method. The service was formally terminated on 1 February 1871; in fact, the last pigeons were released on 1 and 3 February. The pigeons that were still alive were now official property and were sold at the Depot du Mobilier de l'Etat. Their modest value as racing pigeons was reflected in the average price of only 1 franc 50 centimes, though two pigeons, reported to have made three journeys, were purchased by an enthusiast for 26 francs. The success of the pigeon post, both for official and for private messages, did not pass unnoticed by the military forces of the European powers, and in the years that followed the Franco-Prussian War pigeon sections were established in their armies. The advent of wireless communication reduced the need for messenger pigeons, although in certain particular applications pigeons remained the only method of communication. But never again were pigeons called upon to perform such a tremendous public service as that which they had maintained during the siege of Paris. Canada Major-General Donald Roderick Cameron, then Commandant of the Royal Military College of Canada in Kingston, Ontario, recommended an international pigeon service for marine search and rescue and military service in a paper entitled "Messenger Pigeons, a National Question". Sir Charles Hibbert Tupper, then Minister of Marine and Fisheries, supported the pigeon policy. Colonel Goldie, Assistant Adjutant General, Major Waldron of the Royal Artillery, and Captain Dopping-Hepenstal of the Royal Engineers carried through the plan. The pigeon post between look-out stations at lighthouses on islands and the mainland at the citadel in Halifax, Nova Scotia provided a messenger service from 1891 until it was discontinued in 1895. 
The pigeon post suffered heavy mortality among the pigeons, as many were lost on operations. The flight from the Citadel in Halifax, Nova Scotia to Sable Island, for example, was difficult for the pigeons to complete. Catalina Island From 1894 to 1898, pigeons carried mail from Avalon across the Santa Barbara Channel to Los Angeles. Two pigeon fanciers, brothers Otto J. and O. F. Zahn, reached an agreement with Western Union under which it would not build a telegraph line to the isolated island so long as the pigeons did not compete with it on the mainland. Fifty birds were trained, carrying three copies of each message because of the danger of hunters and predators. They made the 48-mile passage in about one hour, bringing letters, news clippings from the Los Angeles Times, and emergency summonses for doctors. In three seasons of operation only two letters failed to come through, but at $0.50 to $1.00 per message the service was not profitable, and in 1898 the Zahn brothers ended the post. Great Barrier Island (New Zealand) Before the pigeon post service was established, the only regular connection between the community on Great Barrier Island (90 kilometres northeast of Auckland) and the mainland was provided by a weekly coastal steamer. The island's isolation was highlighted when the ship SS Wairarapa was wrecked off its coast in 1894, with the loss of 121 lives, and the news took several days to reach the mainland. The pigeon post service between the island and Auckland began in 1897. Soon there were two rival pigeongram companies, both of which issued distinctive and attractive stamps. The stamps have been eagerly collected for their novelty value, and some have become extremely rare. Initially, the service operated only from Great Barrier Island to Auckland, the reverse route being considered uneconomic. On the island, pigeongram agencies were established at Port Fitzroy, Okupu, and Whangaparara. Birds were sent over to the island on the weekly steamer and flew back to Auckland with up to five messages per bird, written on lightweight writing stock and attached to their legs. Great Barrier Island's pigeongram service ended when the first telegraph cable was laid between the island and the mainland in 1908. India The Orissa police in India established regular pigeon posts at Cuttack, Chatrapur, Kendrapara, Sambalpur and Denkanal, and these pigeons proved their worth in emergencies and natural calamities. During the centenary celebrations of the Indian postal service in 1954, the Orissa police pigeons demonstrated their capacity by conveying the message of inauguration from the President of India to the Prime Minister. The last of the pigeon post services in the world (the one in Cuttack, India) was closed in 2008, although about 150 pigeons continue to be maintained for ceremonial purposes in Cuttack and at the Police Training College in Angul. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Metropolitan_area_network] | [TOKENS: 1699] |
Metropolitan area network A metropolitan area network (MAN) is a computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area. The term MAN is applied to the interconnection of local area networks (LANs) in a city into a single larger network which may then also offer efficient connection to a wide area network. The term is also used to describe the interconnection of several LANs in a metropolitan area through the use of point-to-point connections between them. History By 1999, local area networks (LANs) were well established and providing data communication in buildings and offices. For the interconnection of LANs within a city, businesses relied primarily on the public switched telephone network. But while the telephone network was able to support the packet-based exchange of data that the various LAN protocols implemented, the bandwidth of the telephone network was already under heavy demand from circuit-switched voice, and the telephone exchanges were ill-designed to cope with the traffic spikes that LANs tended to produce. To interconnect local area networks more effectively, it was suggested that office buildings be connected using single-mode optical fiber lines, which were by that time widely used in long-haul telephone trunks. Such dark fibre links were in some cases already installed on customer premises, and telephone companies started to offer dark fibre within their subscriber packages. Fibre optic metropolitan area networks were operated by telephone companies as private networks for their customers and did not necessarily have full integration with the public wide area network (WAN) through gateways. Besides the larger companies that connected their offices across metropolitan areas, universities and research institutions also adopted dark fibre as their metropolitan area network backbone. In West Berlin, the BERCOM project built up a multifunctional broadband communications system to connect the mainframe computers housed by publicly funded universities and research institutions in the city. The BERCOM MAN project could progress at speed because the Deutsche Bundespost had already installed hundreds of miles of fibre optic cable in West Berlin. Like other metropolitan dark fibre networks at the time, the dark fibre network in West Berlin had a star topology with a hub somewhere in the city centre. The backbone of the dedicated BERCOM MAN for universities and research institutions was an optical fibre double ring that used a high-speed slotted ring protocol developed by the GMD Research Centre for Innovative Computer Systems and Telephony. The BERCOM MAN backbone could thus support 2 × 280 Mbit/s data transfer. The productive use of dense wavelength-division multiplexing (DWDM) provided another impetus for the development of metropolitan area networks in the 2000s. Long-haul DWDM, with ranges from 0 to 3000+ km, had been developed so that companies that stored large amounts of data on different sites could exchange data or establish mirrors of their file servers. With the use of DWDM on the existing fibre optic MANs of carriers, companies no longer needed to connect their LANs with a dedicated fibre optic link. With DWDM, companies could build dedicated MANs using the existing dark fibre network of a provider in a city. 
MANs thus became cheaper to build and maintain. The DWDM platforms provided by dark fibre providers in cities allow a single fibre pair to be divided into 32 wavelengths. One wavelength could support between 10 Mbit/s and 10 Gbit/s. Thus companies that paid for a MAN to connect different office sites within a city could increase the bandwidth of their MAN backbone as part of their subscription. DWDM platforms also removed the need for protocol conversion to connect LANs in a city, because any protocol and any traffic type could be transmitted using DWDM. Effectively, it gave companies wishing to establish a MAN a free choice of protocol. Metro Ethernet uses a fibre optic ring as a Gigabit Ethernet MAN backbone within a larger city. The ring topology is implemented using the Internet Protocol (IP) so that data can be rerouted if a link is congested or fails. In the US, Sprint was an early adopter of fibre optic rings that routed IP packets on the MAN backbone. Between 2002 and 2003 Sprint built three MAN rings to cover San Francisco, Oakland and San Jose, and in turn connected these three metro rings with a further two rings. The Sprint metro rings routed voice and data, were connected to several local telecom exchange points and totalled 189 miles of fibre optic cable. The metro rings also connected many cities that went on to become part of the Silicon Valley tech hub, such as Fremont, Milpitas, Mountain View, Palo Alto, Redwood City, San Bruno, San Carlos, Santa Clara and Sunnyvale. The metro Ethernet rings that did not route IP traffic instead used one of the various proprietary Spanning Tree Protocol implementations; each MAN ring had a root bridge. Because layer 2 switching cannot operate if there is a loop in the network, the protocols that support L2 MAN rings all need to block redundant links and thus block part of the ring. Encapsulation protocols, such as Multiprotocol Label Switching (MPLS), were also deployed to address the drawbacks of operating L2 metro Ethernet rings. Metro Ethernet was effectively the extension of Ethernet protocols beyond the local area network (LAN), and the ensuing investment in Ethernet led to the deployment of carrier Ethernet, where Ethernet protocols are used in wide area networks (WANs). The efforts of the Metro Ethernet Forum (MEF) in defining best practice and standards for metropolitan area networks thus also defined carrier Ethernet. While the IEEE tried to standardise the emerging Ethernet-based proprietary protocols, industry forums such as the MEF filled the gap and in January 2013 launched a certification for network equipment that can be configured to meet Carrier Ethernet 2.0 specifications. Metropolitan Internet exchange points Internet exchange points (IXs) have historically been important for the connection of MANs to the national or global Internet. The Boston Metropolitan Exchange Point (Boston MXP) enabled metro Ethernet providers, such as HarvardNet, to exchange data with national carriers, such as the Sprint Corporation and AT&T. Exchange points also serve as low-latency links between campus area networks; thus the Massachusetts Institute of Technology and Boston University could exchange data, voice and video using the Boston MXP.
Further examples of metropolitan Internet exchanges in the USA that were operational as of 2002 include the Anchorage Metropolitan Access Point (AMAP), the Seattle Internet Exchange (SIX), the Dallas-Fort Worth Metropolitan Access Point (DFMAP) and the Denver Internet Exchange (IX-Denver). Verizon put into operation three regional metropolitan exchanges to interconnect MANs and give them access to the Internet. The MAE-West serves the MANs of San Jose and Los Angeles in California, the MAE-East interconnects the MANs of New York City, Washington, D.C., and Miami, and the MAE-Central interconnects the MANs of Dallas, Texas, and Illinois. In larger cities several local providers may have built a dark fibre MAN backbone. In London, the metro Ethernet rings of several providers make up the London MAN infrastructure. Like other MANs, the London MAN primarily serves the needs of its urban customers, who typically need a high number of connections with low bandwidth, fast transit to other MAN providers, and high-bandwidth access to national and international long-haul providers. Within the MANs of larger cities, metropolitan exchange points now play a vital role. The London Internet Exchange (LINX) had by 2005 built up several exchange points across the Greater London region. Cities that host one of the international Internet exchanges have become a preferred location for companies and data centres. The Amsterdam Internet Exchange (AMS-IX) is the world's second-largest Internet exchange and has attracted companies dependent on high-speed Internet access to Amsterdam; the Amsterdam metropolitan area network has benefited in turn. Similarly, Frankfurt has become a magnet for the data centres of international companies because it hosts the non-profit DE-CIX, the largest Internet exchange in the world. The business model of the metro DE-CIX is to reduce transit costs for local carriers by keeping data in the metropolitan area or region, while at the same time allowing low-latency long-haul peering globally with other major MANs. |
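The provisioning arithmetic behind the DWDM-based MANs described earlier in this article (a single fibre pair divided into 32 wavelengths, each carrying between 10 Mbit/s and 10 Gbit/s) can be sketched in a few lines of Python. This is a minimal illustration; the function name and default figures are assumptions drawn from the numbers quoted above, not any carrier's specification.

```python
# Back-of-the-envelope capacity model for a DWDM-provisioned MAN:
# one fibre pair divided into 32 wavelengths, each carrying between
# 10 Mbit/s and 10 Gbit/s, per the figures quoted above.

def aggregate_capacity_gbps(wavelengths: int = 32,
                            per_wavelength_gbps: float = 10.0) -> float:
    """Total backbone capacity of a single DWDM fibre pair in Gbit/s."""
    return wavelengths * per_wavelength_gbps

# A subscriber can grow the backbone without laying new fibre, simply
# by lighting more wavelengths or upgrading the per-wavelength optics.
minimum = aggregate_capacity_gbps(per_wavelength_gbps=0.01)  # 10 Mbit/s each
maximum = aggregate_capacity_gbps(per_wavelength_gbps=10.0)  # 10 Gbit/s each
print(f"One fibre pair: {minimum:.2f} to {maximum:.0f} Gbit/s aggregate")
```

Because each wavelength is protocol-agnostic, the same arithmetic holds whatever traffic type a wavelength carries, which is precisely the free choice of protocol noted above.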
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEWildRoddenGroddRuch2003-34] | [TOKENS: 8460] |
Contents Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punchline, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour: the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick performers work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved. Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. It is a comic triple dating back to 1200 BC. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punchline) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punning phrase "tertia deducta" can be translated as "with one-third off (in price)" or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The authorship of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in the literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widespread in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree in one form or another to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking. Who is telling what jokes to whom? And why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
This neuroscience finding adds credence to a common experience with off-colour jokes: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the joke's moral/ethical content. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings. The punchline in the joke remains the same; however, it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, moving from general to topical to explicitly sexual humour, signalled openness on the part of the waitress to a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better. What makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or a forward to further recipients. Interaction is limited to the computer screen and for the most part solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. Forwarding an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research has been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour.
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Many joke cycles have circulated in the recent past. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which goes beyond the simple collection and documentation undertaken previously by folklorists and ethnologists.
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., the numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to the individual motifs included in the narrative: actors, items and incidents. It does not, however, provide a way to classify a text by more than one element at a time, even though it is theoretically possible to classify the same text under multiple motifs. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to it. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to order an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry.
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"?
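Before following those questions into measurement, the six-KR label just described can be pictured as a simple record type. The sketch below is an illustration only: the field values, the single hierarchy check and all names are invented for the example, encoding the one restriction named above (a lightbulb-joke Situation forces the riddle Narrative Strategy) and the caveat that TA and LM may be empty.

```python
# A GTVH-style classification label: six Knowledge Resources combined
# into one descriptor. Field names follow the standard KR abbreviations;
# the values and the single hierarchy rule below are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GTVHLabel:
    so: str            # Script Opposition
    lm: Optional[str]  # Logical Mechanism (may be empty)
    si: str            # Situation
    ta: Optional[str]  # Target (may be empty)
    ns: str            # Narrative Strategy
    la: str            # Language

    def check_hierarchy(self) -> None:
        # Higher-level KRs restrict the options of lower-level ones.
        # Example rule from the text: a lightbulb joke (SI) must take
        # the form of a riddle (NS).
        if self.si == "lightbulb" and self.ns != "riddle":
            raise ValueError("a lightbulb-joke SI requires the riddle NS")

label = GTVHLabel(
    so="dumbness vs. normality",
    lm="faulty reasoning",
    si="lightbulb",
    ta=None,  # TA may be empty, per the caveat quoted above
    ns="riddle",
    la="'How many ... does it take to change a lightbulb?'",
)
label.check_hierarchy()  # passes: the SI and NS values are consistent
```

Two jokes can then be compared by counting the KR fields on which their labels agree, which is the similarity evaluation the theory describes.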
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that neither smiles nor laughter is always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, no two tools use the same jokes, and across languages this would not be feasible, so how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much inclined to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are riddled with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline.
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as a Logical Mechanism (LM) (referring to the mechanism which connects the different linguistic scripts in the joke) and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation in telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index, first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role as interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now?
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study embracing the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, acting apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence.
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because punning lends itself to simple, straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH/GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Although the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts currently underway. |
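The template-driven punning programs described above can be caricatured in a few lines of Python. This sketch is a deliberately primitive illustration of the approach (a fixed template filled from a finite table of pre-defined options); the word list is invented for the example and represents no particular published system.

```python
import random

# A toy template-based pun generator: no semantics and no scripts, just
# a fixed question-answer template filled from a small table of canned
# setups and punchlines. This mirrors the "finite set of pre-defined
# punning options" described above; the entries are invented examples.

PUN_TABLE = [
    ("a fish with no eyes", "a fsh"),
    ("a boomerang that doesn't come back", "a stick"),
    ("a bear with no teeth", "a gummy bear"),
]

TEMPLATE = "Q: What do you call {setup}? A: {punchline}."

def make_pun() -> str:
    setup, punchline = random.choice(PUN_TABLE)
    return TEMPLATE.format(setup=setup, punchline=punchline)

print(make_pun())
```

The program's "humour" lives entirely in its hand-built table; change the table and the output changes with it, which is exactly the lack of intelligence, and of wide-ranging semantic scripts, noted above.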
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-10] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of the company's then-employed AI safety researchers left, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. Nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.
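The "capped" arrangement just described lends itself to a simple arithmetic model. The sketch below is a simplified, hypothetical reading of the published 100x cap, not the actual legal terms: an investor's return is clipped at 100 times the investment, with any excess accruing to the nonprofit.

```python
# Simplified model of a capped-profit payout: the investor keeps at
# most cap_multiple times the original investment; any remainder flows
# to the nonprofit. Illustrative only; the real terms are more complex.

def capped_payout(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Return (investor_share, excess_to_nonprofit) in the same units."""
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    return investor_share, max(0.0, gross_return - cap)

# Under this model a $10M stake can return at most $1B; a $2.5B gross
# return would send the remaining $1.5B to the nonprofit.
print(capped_payout(10e6, 2.5e9))  # (1000000000.0, 1500000000.0)
```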
Minority board members with a stake in OpenAI Global, LLC are likewise barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous", though Musk later revived legal action against Altman and others in August. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the bid complicated Altman's restructuring plan by suggesting a floor for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter titled "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, arguing that the restructuring was illegal and would remove governance safeguards from the nonprofit and oversight by the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to keep the organization accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation.
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman said was the most likely path forward.

In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which was to be used for Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud-computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies.

In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, with investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced the Stargate Project, a joint venture between OpenAI, Oracle, SoftBank, and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, which the partners planned to fund over the following four years. In June 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for military AI applications, alongside Anthropic, Google, and xAI. In July, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history to that point. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter, and Thrive. In July 2025, the company reported annualized revenue of $12 billion.
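The stake percentages and the $135 billion valuation of Microsoft's share quoted above are mutually consistent; as a back-of-the-envelope check (our arithmetic, using only figures stated in the text):

\[ \frac{\$135\ \text{billion}}{0.27} = \$500\ \text{billion}, \qquad 0.26 \times \$500\ \text{billion} = \$130\ \text{billion} \]

so the implied total equity value is $500 billion, matching the valuation reported for the October 2025 share sale discussed below, and putting the OpenAI Foundation's 26% stake at roughly $130 billion.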
Annualized revenue of $12 billion was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028 (the 2026–2028 figures alone sum to $97 billion). These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position as an industry leader. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, valuing the company at $500 billion and surpassing SpaceX as the world's most valuable privately held company.

On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo, and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman spoke in favor of returning to OpenAI, he later stated that he had considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him failed. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board had initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstituted board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up the observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine whether Altman's alleged lack of candor had misled investors. In 2024, following Altman's temporary removal and return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.

In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave under which OpenAI received $350 million worth of CoreWeave shares and access to AI infrastructure in return for $11.9 billion paid over five years; Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product-testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO, Vijaye Raji, as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. In October 2025, OpenAI acquired the personal finance app Roi, as well as Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million; the acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities.
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. The annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably in ChatGPT's training data and outputs. The text snippets the annotators labeled, however, often contained detailed descriptions of various types of violence, including sexual violence. A Time investigation found that OpenAI began sending snippets of data to Sama as early as November 2021, and the four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance, and management.

In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative is intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the following five years. The same month, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal has not been finalized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion-dollar deal with AMD under which OpenAI committed to purchasing six gigawatts' worth of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance, and share-price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform; more than 200 Disney, Marvel, Star Wars, and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership; under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects.

OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and are included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft.
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.

Services

In February 2019, GPT-2 was announced, gaining attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365, and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand; access for new subscribers reopened a month later, on December 13. In December 2024, the company launched the Sora model, as well as OpenAI o1, an early reasoning model internally codenamed "Strawberry". Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users; the feature was initially available only to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
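As an aside for readers unfamiliar with how these services are consumed programmatically, the following is a minimal sketch of a request against OpenAI's API using the current official openai Python client (openai >= 1.0); the model name and prompt are illustrative, and the original 2020 "API" product exposed an earlier completions-style interface rather than this chat interface:

    from openai import OpenAI  # current official Python client

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Request a chat completion; the model name here is illustrative.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Summarize GPT-3 in one sentence."}],
    )

    # The reply text lives in the first choice's message.
    print(response.choices[0].message.content)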
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind had solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, described as better at creating spreadsheets, building presentations, perceiving images, writing code, and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing.

In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this reversal. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was growing riskier, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" within a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with only a minority of overall usage related to business productivity.

In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders, Ilya Sutskever and Jan Leike, left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines such as Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that the incident was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI systems handling sensitive data.
Management

In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI; Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Superalignment co-leader Jan Leike also departed, citing concerns over safety and trust. OpenAI then signed content deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors.

Governance and legal issues

In May 2023, Sam Altman, Greg Brockman, and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of".

In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices in developing ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. Such demands are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The investigation covered allegations that the company had scraped public data and published false and defamatory information; the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. The shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated preempting state AI laws with federal law. According to Scott Kohler, OpenAI has opposed California's AI legislation, suggesting that the state bill encroaches on matters better handled by the federal government. Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation.

Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he had forfeited his vested equity in OpenAI in order to leave without signing it. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.

OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay, and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult, and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3 and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story, and AlterNet Media filed lawsuits against OpenAI on copyright grounds; the suits were said to chart a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications were The Mercury News, The Denver Post, The Orange County Register, the St. Paul Pioneer Press, the Chicago Tribune, the Orlando Sentinel, the Sun Sentinel, and the New York Daily News. In June 2023, a lawsuit claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker.
The suit was filed in San Francisco, California, by sixteen anonymous plaintiffs, who also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications, like The New York Times, chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press, and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN, and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement.

In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources it used could be made available.

OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its usage policies included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare"; its new policies prohibit using the service "to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that months of conversations with ChatGPT about mental health and methods of self-harm had contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis-response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot; the complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT use. In December 2025, the estate of Suzanne Adams sued OpenAI over her killing, allegedly by her son, Stein-Erik Soelberg, 56 at the time, who had been experiencing paranoid delusions and had often discussed his ideas with ChatGPT in the months prior. The estate claimed that the company shared responsibility because of the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users who are disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Multiple_star_system] | [TOKENS: 1942]
Star system

A star system or stellar system is a small number of stars that orbit each other, bound by gravitational attraction. The term may sometimes be used to refer to a single star. A large group of stars bound by gravitation is generally called a star cluster or galaxy, although, broadly speaking, these are also star systems. Star systems are not to be confused with planetary systems, which include planets and similar bodies (such as comets).

Terminology

A star system of two stars is known as a binary star, binary star system, or physical double star. Systems with four or more components are rare and much less common than those with two or three. Multiple-star systems are called triple, ternary, or trinary if they contain three stars; quadruple or quaternary if they contain four stars; quintuple or quintenary with five stars; sextuple or sextenary with six stars; septuple or septenary with seven stars; octuple or octenary with eight stars; and nonuple or nonary with nine stars. These systems are smaller than open star clusters, which have more complex dynamics and typically have from 100 to 1,000 stars.

Optical doubles and multiples

Binary and multiple star systems are also known as physical multiple stars, to distinguish them from optical multiple stars, which merely look close together when viewed from Earth. "Multiple stars" may refer to either optical or physical groupings, but optical multiples do not form a star system. Triple stars that are not all gravitationally bound (and thus do not form a triple star system) might comprise a physical binary and an optical companion (such as Beta Cephei) or, in rare cases, a purely optical triple star (such as Gamma Serpentis).

Abundance

Research on binary and multiple stars estimates that they make up about a third of the star systems in the Milky Way galaxy, with the other two-thirds consisting of single stars. Binary stars are the most common non-single systems. Among multiple star systems, the number of known systems decreases exponentially with multiplicity: for example, in the 1999 revision of Tokovinin's catalog of physical multiple stars, 551 of the 728 systems described are triple. However, because of suspected selection effects, the ability to interpret these statistics is very limited.

Detection

Various methods exist to detect physical star systems and distinguish them from optical doubles and multiples.

Orbital characteristics

In systems that satisfy the assumptions of the two-body problem, including having negligible tidal effects, perturbations (from the gravity of other bodies), and transfer of mass between stars, the two stars will trace out stable elliptical orbits around the barycenter of the system. Examples of binary systems are Sirius, Procyon, and Cygnus X-1, the last of which consists of a star and a black hole. Multiple-star systems can be divided into two main dynamical classes: hierarchical systems and trapezia. Most multiple-star systems are organized in what is called a hierarchical system: the stars in the system can be divided into two smaller groups, each of which traverses a larger orbit around the system's center of mass. Each of these smaller groups must also be hierarchical, which means that they must be divided into smaller subgroups which themselves are hierarchical, and so on. Each level of the hierarchy can be treated as a two-body problem by considering close pairs as if they were a single star.
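Because each level of the hierarchy reduces to a two-body problem, the textbook form of Kepler's third law (a standard relation, not quoted in the source) connects each orbit's size and period to the total mass involved. With the semi-major axis a in astronomical units and the orbital period P in years,

\[ M_1 + M_2 = \frac{a^3}{P^2} \]

gives the pair's combined mass in solar masses; for instance, a pair with a = 10 AU and P = 10 years would total 1000/100 = 10 solar masses.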
In these systems, there is little interaction between the orbits, and the stars' motion will continue to approximate stable Keplerian orbits around the system's center of mass. For example, a stable trinary system consists of two stars in a close binary, with a third star orbiting the pair at a distance much larger than that of the binary orbit. If the inner and outer orbits are comparable in size, the system may become dynamically unstable, leading to a star being ejected from the system. EZ Aquarii is an example of a physical hierarchical triple system, in which an outer star orbits an inner binary composed of two more red dwarfs. Hierarchical arrangements can be organized by what Evans (1968) called mobile diagrams, which look similar to ornamental mobiles hung from the ceiling. Each level of the mobile illustrates the decomposition of the system into two or more smaller systems. Evans calls a diagram multiplex if there is a node with more than two children, i.e. if the decomposition of some subsystem involves two or more orbits of comparable size. Because multiplexes may be unstable, multiple stars are expected to be simplex, meaning that at each level there are exactly two children. Evans calls the number of levels in the diagram its hierarchy. Higher hierarchies are also possible, and most of them either are stable or suffer from internal perturbations. Others consider that complex multiple stars will, in time, theoretically disintegrate into less complex multiple stars, like the more commonly observed triples or quadruples.

Trapezia are usually very young, unstable systems. These are thought to form in stellar nurseries and quickly fragment into stable multiple stars, which in the process may eject components as galactic high-velocity stars. They are named after the multiple star system known as the Trapezium Cluster in the heart of the Orion Nebula. Such systems are not rare, and commonly appear close to or within bright nebulae. These stars have no standard hierarchical arrangement, but compete for stable orbits; this relationship is called interplay. Such stars eventually settle down to a close binary with a distant companion, with the other star(s) previously in the system ejected into interstellar space at high velocities. This dynamic may explain runaway stars ejected during the collision of two binary star groups or a multiple system; such an event is credited with ejecting AE Aurigae, Mu Columbae, and 53 Arietis at above 200 km·s−1, and has been traced to the Trapezium Cluster in the Orion Nebula some two million years ago.

Designations and nomenclature

The components of multiple stars can be specified by appending the suffixes A, B, C, etc., to the system's designation. Suffixes such as AB may be used to denote the pair consisting of A and B. The sequence of letters B, C, etc. may be assigned in order of separation from component A. Components discovered close to an already known component may be assigned suffixes such as Aa, Ba, and so forth. A. A. Tokovinin's Multiple Star Catalogue uses a system in which each subsystem in a mobile diagram is encoded by a sequence of digits. In the mobile diagram (d) above, for example, the widest system would be given the number 1, while the subsystem containing its primary component would be numbered 11 and the subsystem containing its secondary component would be numbered 12.
Subsystems that would appear below this in the mobile diagram are given numbers with three, four, or more digits. When describing a non-hierarchical system by this method, the same subsystem number will be used more than once; for example, a system with three visual components, A, B, and C, no two of which can be grouped into a subsystem, would have two subsystems numbered 1, denoting the two binaries AB and AC. In this case, if B and C were subsequently resolved into binaries, they would be given the subsystem numbers 12 and 13. The current nomenclature for double and multiple stars can cause confusion, as binary stars discovered in different ways are given different designations (for example, discoverer designations for visual binary stars and variable star designations for eclipsing binary stars) and, worse, component letters may be assigned differently by different authors, so that, for example, one person's A can be another's C. Discussion starting in 1999 resulted in four proposed schemes to address this problem. For a designation system, identifying the hierarchy within the system has the advantage of making it easier to identify subsystems and compute their properties. However, it causes problems when new components are discovered at a level above or intermediate to the existing hierarchy, in which case part of the hierarchy will shift inwards. Components which are found to be nonexistent, or are later reassigned to a different subsystem, also cause problems. During the 24th General Assembly of the International Astronomical Union in 2000, the WMC scheme was endorsed, and it was resolved by Commissions 5, 8, 26, 42, and 45 that it should be expanded into a usable uniform designation scheme. A sample of a catalog using the WMC scheme, covering half an hour of right ascension, was later prepared. The issue was discussed again at the 25th General Assembly in 2003, and it was again resolved by Commissions 5, 8, 26, 42, and 45, as well as the Working Group on Interferometry, that the WMC scheme should be expanded and further developed. The sample WMC is hierarchically organized; the hierarchy used is based on observed orbital periods or separations. Since it contains many visual double stars, which may be optical rather than physical, this hierarchy may be only apparent. It uses upper-case letters (A, B, ...) for the first level of the hierarchy, lower-case letters (a, b, ...) for the second level, and numbers (1, 2, ...) for the third. Subsequent levels would use alternating lower-case letters and numbers, but no examples of this were found in the sample.
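Both conventions above, Tokovinin's digit strings and the WMC's alternating letters and numbers, are mechanical enough to sketch in a few lines of code. The following Python sketch is our illustration of the schemes as described (the component names and tree layout are hypothetical), not an official implementation:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Node:
        """A node in a 'mobile diagram': a star (leaf) or a subsystem."""
        name: str
        primary: Optional["Node"] = None
        secondary: Optional["Node"] = None

    def tokovinin_numbers(node: Node, code: str = "1") -> List[Tuple[str, str]]:
        """List (number, label) pairs: the widest system is '1', its primary
        subsystem '11', its secondary '12', with one extra digit per level."""
        if node.primary is None and node.secondary is None:
            return []  # a single star is not a subsystem
        pairs = [(code, node.name)]
        for digit, child in (("1", node.primary), ("2", node.secondary)):
            if child is not None:
                pairs += tokovinin_numbers(child, code + digit)
        return pairs

    def wmc_suffix(path: List[int]) -> str:
        """WMC-style suffix for a path of 1-based component indices:
        upper-case letters at the first level, then lower-case letters
        and numbers alternating at deeper levels."""
        parts = []
        for level, idx in enumerate(path):
            if level == 0:
                parts.append(chr(ord("A") + idx - 1))
            elif level % 2 == 1:
                parts.append(chr(ord("a") + idx - 1))
            else:
                parts.append(str(idx))
        return "".join(parts)

    # Hypothetical triple: a close pair (Aa, Ab) plus a distant companion B.
    triple = Node("AB", Node("Aa+Ab", Node("Aa"), Node("Ab")), Node("B"))
    print(tokovinin_numbers(triple))   # [('1', 'AB'), ('11', 'Aa+Ab')]
    print(wmc_suffix([1, 1]), wmc_suffix([1, 2, 1]))  # 'Aa' 'Ab1'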
========================================
[SOURCE: https://en.wikipedia.org/wiki/Falconry] | [TOKENS: 7334]
Falconry

Falconry is the hunting of wild animals in their natural state and habitat by means of a trained bird of prey. Small animals are hunted; squirrels and rabbits often fall prey to these birds. Two traditional terms are used to describe a person involved in falconry: a "falconer" flies a falcon, while an "austringer" (a term of Old French origin) keeps Eurasian goshawks and uses accipiters for hunting. In modern falconry, the red-tailed hawk (Buteo jamaicensis), Harris's hawk (Parabuteo unicinctus), and the peregrine falcon (Falco peregrinus) are some of the more commonly used birds of prey. The practice of hunting with a conditioned falconry bird is also called hawking or "gamehawking", although the words hawking and hawker have become so widely used to refer to petty traveling traders that the terms "falconer" and "falconry" now apply to most use of trained birds of prey to catch game. However, many contemporary practitioners still use these words in their original meaning. In early English falconry literature, the word "falcon" referred to a female peregrine falcon only, while the word "hawk" or "hawke" referred to a female hawk. A male hawk or falcon was referred to as a "tiercel" (sometimes spelled "tercel"), as it was roughly one-third smaller than the female. This traditional sport grew throughout Europe and is also an icon of Arabian culture. The saker falcon used by Arabs for falconry is called "Hur", i.e. free bird, and it has been used in falconry in the Arabian Peninsula since ancient times. Saker falcons are the national bird of the United Arab Emirates, Saudi Arabia, Qatar, Oman, and Yemen, and the national emblem of many Arab countries; they have been integral to Arab heritage and culture for over 9,000 years.

Birds used in contemporary falconry

Several raptors are used in falconry. They are typically classed as "longwings" (falcons), "shortwings" (accipiters), and "broadwings" (buteos and eagles); owls are also used, although they are far less common. In determining whether a species can or should be used for falconry, the species' behaviour in a captive environment, its responsiveness to training, and its typical prey and hunting habits are considered. To some degree, a species' reputation will determine whether it is used, although this factor is somewhat harder to gauge objectively. In North America, the capable red-tailed hawk is commonly flown by beginner falconers during their apprenticeship. Opinions differ on the usefulness of the kestrel for beginners due to its inherent fragility. In the UK, beginner falconers are often permitted to acquire a larger variety of birds, but Harris's hawk and the red-tailed hawk remain the most commonly used by beginners and experienced falconers alike. Red-tailed hawks are held in high regard in the UK due to the ease of breeding them in captivity, their inherent hardiness, and their capability in hunting the rabbits and hares commonly found throughout the British countryside. Many falconers in the UK and North America switch to accipiters or large falcons after their introduction to the sport with easier birds. In the US, accipiters, several types of buteos, and large falcons may only be owned by falconers who hold a general license; the three kinds of falconry licenses in the United States are typically the apprentice class, general class, and master class. The genus Buteo, known as "buzzards" in the Old World and "hawks" in North America, has a worldwide distribution.
The North American species red-tailed hawk and ferruginous hawk, and rarely the red-shouldered hawk, are examples of species from this genus used in falconry today. The red-tailed hawk is hardy and versatile, taking rabbits, hares, and squirrels; given the right conditions, it can catch the occasional duck or pheasant, and it is also considered a good bird for beginners. The Eurasian common buzzard is also used, although this species requires more perseverance if rabbits are to be hunted.

Parabuteo unicinctus is one of two representatives of the Parabuteo genus worldwide, the other being the white-rumped hawk (P. leucorrhous). Arguably the best rabbit or hare raptor available anywhere, Harris's hawk is also adept at catching birds. Often captive-bred, Harris's hawk is remarkably popular because of its temperament and ability. It is found in the wild living in groups or packs and hunts cooperatively, with a social hierarchy similar to that of wolves. This highly social behaviour is not observed in any other bird-of-prey species and is very adaptable to falconry. The genus is native to the Americas from southern Texas and Arizona to South America. Harris's hawk is often used in the modern technique of car hawking (or drive-by falconry), where the raptor is launched from the window of a moving car at suitable prey.

The genera Astur and Accipiter are also found worldwide. Hawk expert Mike McDermott once said, "The attack of the accipiters is extremely swift, rapid, and violent in every way". They are well known in falconry use both in Europe and North America. The Eurasian goshawk has been trained for falconry for hundreds of years, taking a variety of birds and mammals. Other popular accipiters used in falconry include Cooper's hawk and the sharp-shinned hawk in North America and the Eurasian sparrowhawk in Europe. New Zealand is likely one of the few countries to use a harrier species for falconry; there, falconers successfully hunt with the Australasian harrier (Circus approximans).

The genus Falco is found worldwide and has occupied a central niche in ancient and modern falconry. Most falcon species used in falconry are specialized predators, best adapted to capturing bird prey, such as the peregrine falcon and merlin. A notable exception is the use of desert falcons such as the saker falcon in ancient and modern falconry in Asia and West Asia, where hares were and are commonly taken. In North America, the prairie falcon and the gyrfalcon can capture small mammal prey such as rabbits and hares (as well as the standard gamebirds and waterfowl) in falconry, but this is rarely practiced. Young falconry apprentices in the United States often begin practicing the art with American kestrels, the smallest of the falcons in North America; debate remains on this, as they are small, fragile birds that can die easily if neglected. Small species such as kestrels, merlins, and hobbies are most often flown at small birds such as starlings or sparrows, but can also be used for recreational bug hawking – that is, hunting large flying insects such as dragonflies, grasshoppers, and moths.

Owls (family Strigidae) are not closely related to hawks or falcons, and little is written in classic falconry literature about their use. However, at least two species have been used successfully: the Eurasian eagle-owl and the great horned owl.
Successful training of owls is much different from the training of hawks and falcons, as owls are hearing- rather than sight-oriented. (Owls can only see in black and white, and are long-sighted.) This often leads falconers to believe that owls are less intelligent, as they are easily distracted by new or unnatural noises and do not respond as readily to food cues. However, if trained successfully, owls show intelligence on the same level as that of hawks and falcons.

The genus Aquila (all of which have "booted" or feathered tarsi) has a nearly worldwide distribution. The more powerful types are used in falconry; for example, golden eagles (A. chrysaetos) have reportedly been used to hunt wolves in Kazakhstan, and are now most widely used by the Altaic Kazakh eagle hunters in the western Mongolian province of Bayan-Ölgii to hunt foxes and other large prey, as they are in neighbouring Kyrgyzstan. Most are primarily ground-oriented, but occasionally take birds. Eagles are not used as widely in falconry as other birds of prey, due to the lack of versatility in the larger species (they primarily hunt over large, open ground), the greater potential danger to other people if flown in a densely populated area, and the difficulty of training and managing an eagle. A little over 300 active falconers use eagles in Central Asia: 250 in western Mongolia, 50 in Kazakhstan, and smaller numbers in Kyrgyzstan and western China. Most species in the genus Haliaeetus catch and eat fish, some almost exclusively, but in countries where they are not protected, some have been used effectively in hunting ground quarry. Bald eagles (H. leucocephalus) have been tried by law enforcement agencies in the Netherlands and elsewhere for catching illegal drones, though the experiment was not successful beyond trials, as the eagles were easily distracted.

Husbandry, training, and equipment

Main articles: Hack (falconry) and Falconry training and technique

Falconry around the world

Falconry is currently practiced in many countries around the world. The falconer's traditional choices of bird are the Eurasian and American goshawks and the peregrine falcon. In contemporary falconry in both North America and the UK, they remain popular, although Harris's hawks and red-tailed hawks are likely more widely used. The Eurasian goshawk and the golden eagle are more commonly used in Eastern Europe than elsewhere. In West Asia, the saker falcon is the most traditional species, flown against the houbara bustard, sandgrouse, stone-curlew, other birds, and hares. Peregrines and other captive-bred imported falcons are also commonplace. Falconry remains an important part of Arab heritage and culture. The UAE reportedly spends over US$27 million annually on the protection and conservation of wild falcons, and has set up several state-of-the-art falcon hospitals in Abu Dhabi and Dubai; the Abu Dhabi Falcon Hospital is the largest falcon hospital in the world. There are two breeding farms in the Emirates, as well as others in Qatar and Saudi Arabia. Every year, falcon beauty contests and demonstrations take place at the ADIHEX exhibition in Abu Dhabi. Eurasian sparrowhawks were formerly used to take a range of small birds, but have since fallen out of favour due to their fragility and the availability of various American species. In North America and the UK, falcons are usually flown only at birds.
Large falcons are typically trained to fly in the "waiting-on" style, where the falcon climbs and circles above the falconer or dog and the quarry is flushed when the falcon is in the desired commanding position. Classical game hawking in the UK saw a brace of peregrine falcons flown against red grouse, or merlins in "ringing" flights after skylarks. Rooks and crows are classic game for the larger falcons, and the magpie, making up in cunning what it lacks in flying ability, is another common target. Short-wings can be flown in both open and wooded country against a variety of bird and small mammal prey. Most hunting with large falcons requires large, open tracts where the falcon is afforded the opportunity to strike or seize its quarry before it reaches cover. Most of Europe practices similar styles of falconry, with differing degrees of regulation. Medieval falconers often rode horses, but this is now rare, with the exception of contemporary Kazakh and Mongolian falconry. In Kazakhstan, Kyrgyzstan, and Mongolia, the golden eagle is traditionally flown (often from horseback), hunting game as large as foxes and wolves. In Japan, the Eurasian goshawk has been used for centuries; the country continues to honour its strong historical links with falconry (takagari) while adopting some modern techniques and technologies.

In Australia, although falconry is not specifically illegal, it is illegal to keep any type of bird of prey in captivity without the appropriate permits. The only exemption is when the birds are kept for purposes of rehabilitation (for which a licence must still be held), and in such circumstances it may be possible for a competent falconer to teach a bird to hunt and kill wild quarry as part of its regime of rehabilitation to good health and a fit state to be released into the wild. In New Zealand, falconry was formally legalised in 2011 for one species only, the swamp or Australasian harrier (Circus approximans); this was only possible after over 25 years of effort by the Wingspan National Bird of Prey Centre and the Raptor Association of New Zealand. Falconry there can only be practiced by people who have been issued a falconry permit by the Department of Conservation. Tangential fields, such as bird abatement and raptor rehabilitation, also employ falconry techniques to accomplish their goals. Falcons can live into their mid-teens, with larger hawks living longer and eagles likely to outlive middle-aged owners. Through the captive breeding of rescued birds, the last 30 years have seen a great rebirth of the sport, with a host of innovations; falconry's popularity, through lure-flying displays at country houses and game fairs, has probably never been higher in the past 300 years. Ornithologist Tim Gallagher, editor of the Cornell Lab of Ornithology's Living Bird magazine, documented his experiences with modern falconry in a 2008 book, Falcon Fever. Making use of the natural relationship between raptors and their prey, falconry is now used to control pest birds and animals in urban areas, landfills, commercial buildings, hotels, and airports. Falconry centres or bird-of-prey centres house these raptors. They are responsible for many aspects of bird-of-prey conservation through keeping the birds for education and breeding, and many conduct regular flying demonstrations and educational talks and are popular with visitors worldwide. Such centres may also provide falconry courses, hawk walks, displays, and other experiences with these raptors.
Getting into falconry requires dedication, study, and hands-on experience. In the United States, aspiring falconers must pass a written falconry exam and obtain the appropriate state and federal licenses before keeping a bird of prey; many beginners start by studying local wildlife laws, bird care, and hunting techniques to prepare for the exam. In the United Kingdom, newcomers often begin by volunteering at a falconry centre or finding an experienced falconer to act as a mentor through the British Falconers' Club or similar organisations. Structured training is also available in the form of professional falconry courses, which provide comprehensive introductions to the sport, bird handling, and conservation practices.

Clubs and organizations

In the UK, the British Falconers' Club (BFC) is the oldest and largest of the falconry clubs. It was founded in 1927 by the surviving members of the Old Hawking Club, itself founded in 1864. Working closely with the Hawk Board, an advisory body representing the interests of UK bird-of-prey keepers, the BFC is at the forefront of raptor conservation, falconer education, and sustainable falconry. The BFC now has a membership of over 1,200 falconers; it began as a small and elite club, but is now a sizeable democratic organisation with members from all walks of life, flying hawks, falcons, and eagles at legal quarry throughout the British Isles. The North American Falconers Association (NAFA), founded in 1961, is the premier club for falconry in the US, Canada, and Mexico, with members from around the world. Most US states have their own falconry clubs; although these clubs are primarily social, they also serve to represent falconers with regard to their state's wildlife regulations. The International Association for Falconry and Conservation of Birds of Prey, founded in 1968, currently represents 156 falconry clubs and conservation organisations from 87 countries worldwide, totalling over 75,000 members. The Saudi Falcons Club preserves the historical heritage associated with falconry culture, spreads awareness, and provides training to protect falcons and promote falconry.

Captive breeding and conservation

The successful and now widespread captive breeding of birds of prey began as a response to dwindling wild populations due to persistent toxins such as PCBs and DDT, systematic persecution as undesirable predators, habitat loss, and the resulting limited availability of popular species for falconry, particularly the peregrine falcon. The first known raptors to breed in captivity belonged to a German falconer named Renz Waller, who in 1942–43 produced two young peregrines in Düsseldorf, Germany. The first successful captive breeding of peregrine falcons in North America occurred in the early 1970s through the Peregrine Fund, professor and falconer Heinz Meng, and other private falconer-breeders such as David Jamieson and Les Boyd, who bred the first peregrines by means of artificial insemination. In Great Britain, falconer Phillip Glasier of the Falconry Centre in Newent, Gloucestershire, succeeded in obtaining young from more than 20 species of captive raptors. A cooperative effort began between various government agencies, non-government organizations, and falconers to supplement wild raptor populations in peril.
This effort was strongest in North America, where significant private donations, along with funding allocations through the Endangered Species Act of 1973, provided the means to continue the release of captive-bred peregrines, golden eagles, bald eagles, aplomado falcons, and others. By the mid-1980s, falconers had become self-sufficient with regard to sources of birds to train and fly, in addition to the immensely important conservation benefits conferred by captive breeding. Between 1972 and 2001, nearly all peregrines used for falconry in the U.S. were captive-bred from the progeny of falcons taken before the U.S. Endangered Species Act was passed, and from those few infusions of wild genes available from Canada and special circumstances. Peregrine falcons were removed from the United States' endangered species list on August 25, 1999. Finally, after years of close work with the US Fish and Wildlife Service, a limited take of wild peregrines was allowed in 2001, the first wild peregrines taken specifically for falconry in over 30 years. Some controversy has existed over the origins of captive-breeding stock used by the Peregrine Fund in the recovery of peregrine falcons throughout the contiguous United States. Several peregrine subspecies were included in the breeding stock, including birds of Eurasian origin. Due to the extirpation of the eastern subspecies (Falco peregrinus anatum), its near extirpation in the Midwest, and the limited gene pool within North American breeding stock, the inclusion of non-native subspecies was justified to optimize the genetic diversity found within the species as a whole. Such strategies are common in endangered species reintroduction scenarios, where dramatic population declines result in a genetic bottleneck and the loss of genetic diversity. Laws regulating the hunting, import, and export of wild falcons vary across Asia, and effective enforcement of current national and international regulations is lacking in some regions. It is possible that the spread of captive-bred falcons in falcon markets in the Arabian Peninsula has mitigated the demand for wild falcons. Hybrid falcons The species within the genus Falco are closely related, and some pairings produce viable offspring. The heavy northern gyrfalcon and Asiatic saker are especially closely related, and it is not known whether the Altai falcon is a subspecies of the saker or a descendant of naturally occurring hybrids. Peregrine and prairie falcons have been observed breeding in the wild and have produced offspring. These pairings are thought to be rare, but extra-pair copulations between closely related species may occur more frequently and may account for most naturally occurring hybridization. Some male first-generation hybrids may have viable sperm, whereas very few first-generation female hybrids lay fertile eggs. Thus, naturally occurring hybridization is thought to be relatively insignificant to gene flow in raptor species. The first hybrid falcons bred in captivity were produced in western Ireland, when veteran falconer Ronald Stevens and John Morris put a male saker and a female peregrine into the same moulting mews for the spring and early summer, and the two mated and produced offspring. Captive-bred hybrid falcons have been available since the late 1970s, and enjoyed a meteoric rise in popularity in North America and the UK in the 1990s. Hybrids were initially "created" to combine the horizontal speed and size of the gyrfalcon with the good disposition and aerial ability of the peregrine. 
Hybrid falcons first gained large popularity throughout the Arabian Peninsula, feeding a demand for particularly large and aggressive female falcons capable and willing to take on the very large houbara bustard, the classic falconry quarry in the deserts of West Asia. These falcons were also very popular with Arab falconers, as they tended to withstand a respiratory disease (aspergillosis, caused by molds of the genus Aspergillus) in stressful desert conditions better than other pure species from the Northern Hemisphere. Artificial selection and domestication Some believe that no species of raptor has been in captivity long enough to have undergone successful selective breeding for desired traits. Captive breeding of raptors over several generations tends to result, whether deliberately or as an inevitable consequence of captivity, in selection for certain traits. Escaped falconry birds Falconers' birds are inevitably lost on occasion, though most are found again. Birds can usually be recovered because, during free flights, they wear radio transmitters or bells. The transmitters are mounted at the middle of the tail, on the back, or on the bird's legs. Species have occasionally become established in Britain after escaping or being released. In 1986, a lost captive-bred female prairie falcon (which had been cross-fostered by an adult peregrine in captivity) mated with a wild male peregrine in Utah. The prairie falcon was trapped and the eggs removed, incubated, and hatched, and the hybrid offspring were given to falconers. The wild peregrine paired with another peregrine the next year. Falconry in Hawaii is prohibited largely due to fears that escaped non-native birds of prey could become established on the island chain, aggravating an already severe problem of invasive-species impacts on native wildlife and plant communities. Regulations In sharp contrast to the US, falconry in Great Britain is permitted without a special license, but only captive-bred birds may be used. In the lengthy, record-breaking debates in Westminster during the passage of the 1981 Wildlife and Countryside Bill, efforts were made by the Royal Society for the Protection of Birds and other lobby groups to have falconry outlawed, but these were successfully resisted. After centuries of informal existence, the sport of falconry was finally given formal legal status in Great Britain by the Wildlife and Countryside Act 1981, which allowed it to continue provided all captive raptors native to the UK were officially ringed and government-registered. DNA testing was also available to verify birds' origins. Since 1982, the British government's licensing requirements have been overseen by the Chief Wildlife Act Inspector for Great Britain, who is assisted by a panel of unpaid assistant inspectors. British falconers are entirely reliant upon captive-bred birds for their sport. The taking of raptors from the wild for falconry, although permitted by law under government licence, has not been allowed in recent decades. Anyone is permitted to possess legally registered or captive-bred raptors, although falconers are anxious to point out that this is not synonymous with falconry, which specifically entails the hunting of live quarry with a trained bird. A raptor kept merely as a possession for pleasure (like an aviary bird), although the law may allow it, is not considered to be a falconer's bird. 
Birds may be used for breeding or kept after their hunting days are over, but falconers believe it is preferable that young, fit birds are flown at quarry. In the United States, falconry is legal in the District of Columbia and in every state except Hawaii. A falconer must have a state permit to practice the sport. (Requirements for a federal permit were changed in 2008 and the program discontinued effective January 1, 2014.) Acquiring a falconry license in the United States requires an aspiring falconer to pass a written test, have equipment and facilities inspected, and serve a minimum of two years as an apprentice under a licensed falconer, during which time the apprentice falconer may possess only one raptor. The falconry license has three classes and is issued jointly by the falconer's state of residence and the federal government. The aforementioned apprentice license matriculates to a general-class license, which allows the falconer to keep up to three raptors at one time. (Some jurisdictions may further limit this.) After a minimum of five years at the general level, falconers may apply for a master-class license, which allows them to keep up to five wild raptors for falconry and an unlimited number of captive-produced raptors. (All must be used for falconry.) Certain highly experienced master falconers may also apply to possess golden eagles for falconry. Within the United States, a state's regulations are limited by federal law and treaties protecting raptors. Most states afford falconers an extended hunting season relative to seasons for archery and firearms, but species to be hunted, bag limits, and possession limits remain the same for both. No extended seasons for falconry exist for the hunting of migratory birds such as waterfowl and doves. Federal regulation of falconry in North America is enforced under the statutes of the Migratory Bird Treaty Act of 1918 (MBTA), originally designed to address the rampant commercial market hunting of migratory waterfowl during the early 20th century. Birds of prey suffered extreme persecution from the early 20th century through the 1960s, when thousands of birds were shot at conspicuous migration sites and many state wildlife agencies issued bounties for carcasses. Due to widespread persecution and further impacts to raptor populations from DDT and other toxins, the act was amended in 1972 to include birds of prey. (Eagles are also protected under the Bald and Golden Eagle Protection Act of 1940.) Under the MBTA, taking migratory birds, their eggs, feathers, or nests is illegal. Take is defined in the MBTA to "include by any means or in any manner, any attempt at hunting, pursuing, wounding, killing, possessing, or transporting any migratory bird, nest, egg, or part thereof". Falconers are allowed to trap and otherwise possess certain birds of prey and their feathers with special permits issued by the Migratory Bird Office of the U.S. Fish and Wildlife Service and by state wildlife agencies (issuers of trapping permits). The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) restricts the import and export of most native bird species, which are listed in CITES Appendices I, II, and III. The Wild Bird Conservation Act, enacted in 1992, regulates the importation of CITES-listed birds into the United States. Some controversy exists over the issue of falconers' ownership of captive-bred birds of prey. 
Falconry permits are issued by states in a manner that entrusts falconers to "take" (trap) and possess permitted birds and use them only for permitted activities, but does not transfer legal ownership. No legal distinction is made between native wild-trapped and captive-bred birds of the same species. This legal position is designed to discourage the commercial exploitation of native wildlife. History Evidence suggests that the art of falconry may have begun in Mesopotamia, with the earliest accounts dating to around 2000 BC. Raptor representations also appear in the northern Altai of western Mongolia. The falcon was a symbolic bird of ancient Mongol tribes. Some disagreement exists about whether such early accounts (from the Epic of Gilgamesh and others) document the practice of falconry or are misinterpreted depictions of humans with birds of prey. During the Turkic period of Central Asia (seventh century AD), distinct figures of falconers on horseback were depicted on rocks in the Kyrgyz region. Frederick II of Hohenstaufen (1194–1250) is generally acknowledged as the most significant wellspring of traditional falconry knowledge. He is believed to have obtained firsthand knowledge of Arabic falconry during wars in the region (between June 1228 and June 1229). He obtained a copy of Moamyn's manual on falconry and had it translated into Latin by Theodore of Antioch. Frederick II himself made corrections to the translation in 1241, resulting in De Scientia Venandi per Aves. King Frederick II is most recognized for his falconry treatise, De arte venandi cum avibus (On the Art of Hunting with Birds). Written toward the end of his life, it is widely accepted as the first comprehensive book of falconry, and it is also notable for its contributions to ornithology and zoology. De arte venandi cum avibus incorporated a diversity of scholarly traditions from east to west, and is one of the earliest challenges to Aristotle's explanations of nature. Historically, falconry was a popular sport and status symbol among the nobles of medieval Europe and Asia. Many illustrations in Rashid al-Din's Compendium of Chronicles depict falconry of the period in Mongol settings. Falconry was largely restricted to the noble classes due to the prerequisite commitment of time, money, and space. In art and other aspects of culture, such as literature, falconry remained a status symbol long after it was no longer popularly practiced. The historical significance of falconry within lower social classes may be underrepresented in the archaeological record, due to a lack of surviving evidence, especially from nonliterate nomadic and nonagrarian societies. Within nomadic societies such as the Bedouin, falconry was not practiced for recreation by noblemen. Instead, falcons were trapped and flown at small game during the winter to supplement a very limited diet. In the UK and parts of Europe, falconry probably reached its zenith in the 17th century, but soon faded, particularly in the late 18th and 19th centuries, as firearms became the tool of choice for hunting. (This likely took place throughout Europe and Asia in differing degrees.) Falconry in the UK had a resurgence in the late 19th and early 20th centuries, when a number of falconry books were published. This revival led to the introduction of falconry in North America in the early 20th century. Colonel R. 
Luff Meredith is recognized as the father of North American falconry. Throughout the 20th century, modern veterinary practices and the advent of radio telemetry (transmitters attached to free-flying birds) increased the average lifespan of falconry birds and allowed falconers to pursue quarry and styles of flight that had previously resulted in the loss of their hawk or falcon. Medieval Normans distinguished falconry from the sport of 'hawking': falconry was practiced on horseback and 'hawking' on foot. An immediate impact of the Norman Conquest of England was the nobility's penchant for falconry, so much so that commoners were barred from hunting on particular lands so that the nobility could freely enjoy both sports. Both falconry and 'hawking' were central to Norman cultural identity in medieval times, and Normans transported their falcons on a frame called a cadge. The often-quoted Book of Saint Albans or Boke of St Albans, first printed in 1486 and often attributed to Dame Juliana Berners, provides a hierarchy of hawks and the social ranks for which each bird was supposedly appropriate. This list, however, was mistaken in several respects. The relevance of the "Boke" to practical falconry, past or present, is extremely tenuous, and veteran British falconer Phillip Glasier dismissed it as "merely a formalised and rather fanciful listing of birds". Intangible cultural heritage In 2010, UNESCO inscribed falconry on its Representative List of the Intangible Cultural Heritage of Humanity upon the nomination of eleven countries in which it is an important element of their culture: Belgium, the Czech Republic, France, South Korea, Mongolia, Morocco, Qatar, Saudi Arabia, Spain, Syria, and the United Arab Emirates. Austria and Hungary were added in 2012; Germany, Italy, Kazakhstan, Pakistan, and Portugal were added in 2016; and Croatia, Ireland, Kyrgyzstan, the Netherlands, Poland, and Slovakia were added in 2021. Nominated by a total of twenty-four countries, falconry is the largest multi-national element on the Representative List. In their rationale for inscription on the list, the nominators highlighted that "Originally a means of obtaining food, falconry has acquired other values over time and has been integrated into communities as a social and recreational practice and as a way of connecting with nature. Today, falconry is practised by people of all ages in many countries. As an important cultural symbol in many of those countries, it is transmitted from generation to generation through a variety of means, including through mentoring, within families or in training clubs". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/First_Jewish%E2%80%93Roman_War] | [TOKENS: 15028] |
Contents First Jewish–Roman War Major conflicts The First Jewish–Roman War (66–73/74 CE), also known as the Great Jewish Revolt, the First Jewish Revolt, the War of Destruction, or the Jewish War, was the first of three major Jewish rebellions against the Roman Empire. Fought in the province of Judaea, it resulted in the destruction of Jerusalem and the Jewish Temple, mass displacement, land appropriation, and the dissolution of the Jewish polity. Judaea, once independent under the Hasmoneans, fell to Rome in the first century BCE. Initially a client kingdom, it later became a directly ruled province, marked by oppressive governors, socioeconomic divides, nationalist aspirations, and rising religious and ethnic tensions. In 66 CE, under Nero, unrest flared when a local Greek sacrificed a bird at the entrance of a Caesarea synagogue. Tensions escalated as Governor Gessius Florus looted the Temple treasury and massacred Jerusalem's residents, sparking an uprising during which rebels killed the Roman garrison while pro-Roman officials fled. To quell the unrest, Cestius Gallus, the governor of Syria, invaded Judaea but was defeated at Bethoron, and a provisional government, led by Ananus ben Ananus, was established in Jerusalem. In 67 CE, Vespasian was sent to suppress the revolt, invading Galilee and capturing Yodfat, Tarichaea, and Gamla. As rebels and refugees fled to Jerusalem, the government was overthrown, leading to infighting between Eleazar ben Simon, John of Gischala, and Simon bar Giora. After Vespasian subdued most of the province, Nero's death prompted him to depart for Rome to claim the throne. His son Titus led the siege of Jerusalem, which fell in the summer of 70 CE, resulting in the Temple's destruction and the city's razing. In 71, Titus and Vespasian celebrated a triumph in Rome, and Legio X Fretensis remained in Judaea to suppress the last pockets of resistance, culminating in the fall of Masada in 73/74 CE. The war had profound consequences for the Jewish people, with many killed, displaced, or sold into slavery. The rabbinic sages emerged as leading figures and established a rabbinic center in Yavneh, marking a key moment in the development of Rabbinic Judaism as it adapted to the post-Temple reality. These events mark the transition from the Second Temple period to the Rabbinic period in Jewish history. The revolt also hastened the separation between Christianity and Judaism. The victory strengthened the new Flavian dynasty, which commemorated it through monumental constructions and coinage, imposed a punitive tax on all Jews, and increased the military presence in the region. The Jewish–Roman wars culminated in the Bar Kokhba revolt (132–136 CE), the last major attempt to restore Jewish independence, which resulted in even more catastrophic consequences. Ante bellum In 63 BCE, the kingdom of Judaea was conquered by the Roman Republic, ending Jewish independence under the Hasmonean dynasty. Roman general Pompey intervened in a succession war between two brothers, Hyrcanus and Aristobulus. After capturing Jerusalem, Pompey entered the Temple's Holy of Holies; this was an act of desecration, as only the High Priest was permitted to enter. The Jewish monarchy was abolished, Hyrcanus was appointed to serve exclusively as High Priest, and parts of the kingdom were transferred to Hellenistic cities or to the Roman province of Syria. 
In 40 BCE, Antigonus II Mattathias, Aristobulus' son, briefly regained the throne with Parthian support, becoming the last Hasmonean king. He was deposed in 37 BCE by Herod, who had been appointed "King of the Jews" by the Roman Senate. Herod ruled Judaea as a client kingdom. Although he succeeded in preserving a measure of autonomy, his heavy taxation, harsh repression—including executions of family members—and control over Jewish institutions fostered deep resentment. After Herod's death in 4 BCE, his realm was divided among his sons: Archelaus served as ethnarch of Judea, Samaria, and Idumaea, while Herod Antipas governed Galilee and Perea. Archelaus' misrule led to his deposition in 6 CE, and the Roman Empire annexed his territories as the province of Judaea. In the following decades, Jewish–Roman relations in Judaea faced repeated crises. With the onset of direct Roman rule, the census of Quirinius, instituted by the governor of Syria, triggered an uprising led by Judas of Galilee. Judas led the "fourth philosophy", a movement that recognized God as the only king and rejected foreign rule. During the governorship of Pontius Pilate (r. 26–36 CE), incidents such as the introduction of Roman military standards (emblems of the army) into Jerusalem, the diversion of Temple funds for an aqueduct, and a soldier's indecent exposure near the Temple provoked unrest and bloodshed. Conflicts escalated during pilgrim festivals, as the influx of worshippers often fueled nationalistic sentiments. Under Emperor Caligula's reign (37–41 CE), imperial policy in Judaea briefly broke with earlier, more tolerant practice: his efforts to impose the imperial cult provoked crises and helped fuel anti-Jewish sentiment, leading to violent outbreaks in Alexandria, Egypt, in 38 CE. Tensions further escalated following a dispute at Yavneh (Jamnia), where the Jewish community dismantled a pagan altar. In response, Caligula ordered a statue of himself to be placed in the Temple, provoking widespread outrage. His death averted open conflict, but the episode further deepened Jewish resentment toward Roman rule. In 41 CE, with the support of Emperor Claudius, Herod Agrippa became the client king of Judaea, unifying the territories once ruled by his grandfather, Herod. This briefly restored Jewish self-governance, but after Agrippa's death in 44, Judaea reverted to direct Roman rule, expanding to include Judea, Samaria, Idumaea, Galilee, and Perea. His son, Agrippa II, ruled Chalcis and oversaw the Temple, including appointing and removing High Priests. Judaea was initially stable under restored Roman rule but soon fell into disorder. Around 48, the Romans crucified Jacob and Simon, sons of Judas of Galilee. Clashes erupted between Jews and Samaritans, and by the early 50s, the Sicarii (a group of Jewish radicals) began exploiting pilgrim festivals in Jerusalem for assassinations and intimidation. They also targeted rural landowners, destroying property to deter cooperation with Rome. Religious fanaticism grew, inspiring figures like Theudas, who claimed he would miraculously part the Jordan River but was executed by governor Fadus, and "The Egyptian", whose followers were dispersed by Antonius Felix. In 64, Gessius Florus became governor, securing the position through his wife, who was a friend of Poppaea Sabina, the wife of Emperor Nero. His ties to the imperial family gave him considerable freedom in governance. 
The Roman historian Tacitus described him as unfit for office, and Josephus—a Jewish commander who became a historian after his capture by the Romans—portrayed him as a ruthless official who plundered the region and imposed harsh punishments. The worsening situation under Florus led many to flee the region. Most scholars regard the Jewish War as a prime example of ancient Jewish nationalism. The revolt was driven by the pursuit of freedom, the removal of Roman control, and the establishment of an independent Jewish state. The aspiration for independence grew following Herod's death and particularly after the establishment of direct imperial rule. This desire was partially fueled by the legacy of the successful Maccabean revolt against the Seleucids, which fostered the belief that a similar victory over Rome could be achievable. The Hasmonean-led Jewish state had strengthened Jewish nationalistic awareness and aspirations for independence. Historian David Goodblatt pointed to similarities between the rebels' actions and ideology and those of modern national liberation movements, citing the rebels' struggle to free Judaea, their minting of coins inscribed with "Israel", and their adoption of a new symbolic era, called the "freedom of Israel", as examples. Jewish discontent was fueled by the harsh suppression of unrest and the widespread perception of Roman rule as oppressive. Many Roman officials were corrupt, brutal, or inept, fueling unrest even under competent governors. Florus' governorship is described by ancient sources as the tipping point that sparked the revolt. Tacitus attributed the war to Roman misgovernance rather than Jewish rebelliousness; he noted that Jews showed restraint under harsh governors but lost patience due to Florus' actions. Josephus wrote that the Jews preferred to die in battle rather than endure prolonged suffering under Florus' governance. The concept of "zeal"—a total commitment to God's will and law, rooted in figures like Phinehas, Elijah, and Mattathias, and driven by a belief in Israel's election—is often seen as a key driver of the revolt. Though Eleazar ben Simon's faction was the only one to explicitly call itself "Zealots", historian Martin Hengel maintained that all factions rejecting foreign rule in the name of God's sole sovereignty could rightfully be included under this designation. Hengel traced this view to the intensification of concepts found in the Torah (the first part of the Hebrew Bible and Judaism's central text), such as God's kingship. This ideology resurfaced in the revolt, especially among the Sicarii. Judaic scholar Philip Alexander also described the Zealots as a coalition of factions, united by a shared form of nationalism and the goal of liberating Israel by force. Historian Jonathan Price wrote that apocalyptic beliefs played a role in fueling the revolt, with many rebels envisioning a divinely sanctioned cosmic struggle inspired by prophetic texts such as the Book of Daniel, which foretold the fall of a fourth imperial power that some Jews identified with Rome. Historian Tessa Rajak asserted that there is no evidence to suggest the insurgents were driven by messianic or end-of-days aspirations. Marxist scholars, notably Heinz Kreißig, interpreted the revolt as a class struggle between social strata, and the burning of debt records by the rebels is often cited as proof of socio-economic motives. Critics contend that this reading places political theory above the evidence. 
Price, for instance, noted that there is little evidence of economic grievances; he sees the burning of debt records as a tactic to win popular support, not an expression of ideology. Classicist Guy McLean Rogers wrote that debt was routine and neither a key cause nor a unifying rallying point for the rebels. Price also argued that rebel leaders lacked "class loyalty": Simon bar Giora did free slaves and target the wealthy, but he also had aristocratic support, whereas other leaders lacked any social agenda. Historian Uriel Rappaport wrote that hostility between Jews and the surrounding Greek cities was the decisive factor that made the revolt inevitable, as Rome failed to address the tensions. The provincial Roman garrison was mainly drawn from Hellenistic cities, while Greek-speaking eastern provincials held key administrative roles, heightening tensions. Historian Martin Goodman counter-argued that since Jews had chosen to live in Greek cities, deep hostility was not a long-standing issue, and the ethnic violence that erupted in these cities in 66 was a consequence of rising tensions rather than the root cause of the revolt. Goodman attributes the revolt to the local elite's inability to address economic and societal discontent, a failure he links to their lack of legitimacy, since their authority depended on the Herodians and the Romans, both often despised by the populace; he also argued that elite involvement made Rome view the uprising as a full rebellion and deepened divisions within the rebel state. Initial stages of war In May 66, violence erupted in the city of Caesarea over a land dispute. Local Jews sought to buy land beside their synagogue from its Greek owner, but despite offering well above its value, he refused and built workshops that blocked access to the synagogue. Some young Jews tried to stop the construction, but Florus suppressed their actions. Prominent Jews then paid Florus eight talents to halt the work, but after taking the money he failed to intervene. On Shabbat, Judaism's weekly day of rest, a Greek desecrated the synagogue by sacrificing birds at the entrance, sparking violence between the communities. A Roman cavalry commander tried but failed to stop the violence, and when Jewish leaders complained to Florus, he had them arrested. Afterwards, Florus arrived in Jerusalem and seized 17 talents from the Temple treasury, claiming it was for "governmental purposes". Mass protests ensued, and crowds mocked him by passing around a basket to collect alms as if he were a beggar. When the Sanhedrin—the Jewish high court—refused to surrender the offenders, Florus ordered his troops to sack the Upper Agora, a marketplace in Jerusalem's affluent Upper City, reportedly killing over 3,600 people. Among the victims were wealthy Jews of the equestrian order who, despite being Roman citizens and exempt from such punishment, were not spared. His soldiers exceeded orders, looting and taking prisoners. Jewish princess Berenice, who was visiting the city, pleaded for restraint but was threatened by legionaries. A second massacre occurred when two additional cohorts arrived in the city. The Jews went to greet them peacefully, but were met with silence. Some, angered by this, began insulting Florus, prompting the soldiers to charge and causing a stampede toward the Antonia Fortress. 
Jewish fighters trapped the Roman cohorts with rooftop attacks, forcing them to retreat to Herod's palace, while rebels destroyed the porticoes linking the Temple to the Antonia to block Roman access and protect the Temple treasures. Florus fled the city, leaving a cohort behind to serve as a garrison. Agrippa II hurried from Alexandria to calm the unrest, while Cestius Gallus, the Roman governor of Syria, sent an emissary who found Jerusalem loyal to Rome but opposed to Florus. Agrippa then delivered a public speech to the people of Jerusalem alongside his sister Berenice, acknowledging the failures of Roman administration but urging restraint. He argued that a small nation could not challenge the might of the Roman Empire. At first, the crowd agreed, reaffirming allegiance to the emperor. They restored damaged structures and paid the tax owed. But when he urged patience with Florus until a new governor was appointed, the crowd turned on him, forcing him and Berenice to flee the city. Eleazar ben Hanania, the Temple's captain and son of an ex-High Priest, convinced the priests to cease accepting offerings from foreigners. This act ended the practice of offering sacrifices on behalf of Rome and its emperor, which the Romans viewed as affirmations of loyalty to imperial rule. According to Josephus, this event marked the beginning of the war. Around this time, a faction of Sicarii led by Menahem ben Judah, a descendant of Judas of Galilee, launched a surprise assault on the desert fortress of Masada, capturing it and killing the Roman garrison. The seized weapons were transported to Jerusalem. After failing to pacify the rebels, Jerusalem's moderate leaders sought military assistance from Florus and Agrippa. In response, Agrippa dispatched 2,000 cavalrymen from Auranitis, Batanaea, and Trachonitis. These forces reinforced the moderates, who controlled the Upper City, while Eleazar ben Hanania's followers controlled the Lower City and Temple Mount. During the Jewish wood-gathering festival of Tu B'Av (in August), several Sicarii infiltrated the city and joined the rebellious faction. After several days of fighting, the rebels captured the Upper City, forcing the moderates to retreat into Herod's Palace, while others fled or went into hiding. They burned the house of ex-High Priest Ananias, the royal palaces, and the public archives, where debt records were kept, likely to win support from Jerusalem's poor. The rebels then captured the Antonia Fortress, seizing artillery and massacring the Roman garrison. With reinforcements from the Sicarii, they captured Herod's Palace, then agreed to a ceasefire with the moderates but refused to make peace with the Roman soldiers. The Romans retreated to the towers of Phasael, Hippicus, and Mariamne, where they held out for eleven more days. During this time, the Sicarii captured and killed Ananias and his brother. In mid-September, the besieged soldiers surrendered in exchange for safe passage, but the rebels killed them all except the commander, Metilius, who pledged to convert to Judaism and undergo circumcision. Menahem appeared in public in royal attire, but was soon captured, tortured, and executed by Eleazar ben Hanania's faction; many of his Sicarii followers were killed or scattered. Others, including Menahem's relative Eleazar ben Yair, withdrew to Masada. Ethnic violence spread across the region. Around the time of the garrison massacre, according to Josephus, non-Jews in Caesarea carried out an ethnic cleansing, killing about 20,000 Jews. 
The survivors were arrested by Florus. Hundreds of Jews were reportedly killed in Ascalon and Akko-Ptolemais; in Tyre, Hippos, and Gadara, many were executed or imprisoned. The Jews of Scythopolis initially assisted their fellow townspeople in defending the city from Jewish attackers. However, they were later relocated with their families to a grove outside the town, where they were killed by those who had fought alongside them. In Antioch, Sidon, and Apamea, the local residents spared the Jewish communities, and in Gerasa, they even escorted those who chose to leave all the way to the city's border. Upon hearing of the massacre of Jews in Caesarea, Jewish groups launched attacks on nearby villages and cities, especially in the Decapolis, including Philadelphia, Heshbon, Gerasa, and Pella. Cedasa, Hippos, Akko-Ptolemais, Gaba, and Caesarea were also targeted. Archaeological evidence confirms destruction in Gerasa and Gadara. Josephus also describes Sebaste, Ashkelon, Anthedon, and Gaza as destroyed by fire, although this may be an exaggeration. Violence also broke out in Alexandria when Greeks attacked Jews, capturing some alive and provoking retaliation. Roman governor Tiberius Julius Alexander—a Jew who had renounced his ancestral tradition—attempted mediation but failed, and his troops killed tens of thousands of Jews. In Judaea, Jewish forces seized the fortresses of Cypros near Jericho and Machaerus in Perea. At this point, Gallus marched from Antioch to Judaea with Legio XII Fulminata, 2,000 troops from each of Syria's three other legions, six infantry cohorts, and four cavalry units. The vassal kings Antiochus IV of Commagene, Agrippa II, and Sohaemus of Emesa sent thousands of cavalry and infantry to reinforce his army. Irregular forces from cities like Berytus, driven by anti-Jewish sentiment, were also recruited. From his base in Akko-Ptolemais, Gallus launched a campaign in Galilee, burning Chabulon and nearby villages before marching to Caesarea. His forces captured Jaffa, killed its people, and torched the city. Cavalry units were also dispatched to ravage the toparchy (district) of Narbata, near Caesarea. The residents of Sepphoris welcomed the Romans and pledged their support. Gallus then advanced toward Jerusalem, leaving destruction in his wake. The town of Lydda, largely deserted as most residents had gone to Jerusalem for the religious festival of Sukkot (around September–October), was destroyed, and those who remained were killed. As the army continued through Bethoron and Gabaon, it was ambushed by Jewish forces, suffering heavy losses. Among the Jewish fighters were Niger the Perean, Simon bar Giora, and the Adiabenian princes Monobazus and Candaios. Agrippa made a final attempt at peace, but failed. In late Tishrei (September/October), Gallus encamped on Mount Scopus overlooking Jerusalem. This drove the rebels into the inner city and Temple complex. Upon entering, Gallus set fire to the Bezetha district and the Timber Market to intimidate the population. For unclear reasons, he then lifted the siege and retreated. Josephus suggested that Gallus could have captured the city with more determination. Historian Menahem Stern suggested that Gallus, facing strong resistance, doubted he could seize the city. Historian E. Mary Smallwood proposed that Gallus may have been concerned about the approaching winter, the lack of siege equipment, the risk of ambushes in the hills, and the potential insincerity of the moderates' offer to open the gates. 
Gallus' retreat turned into a rout, resulting in the loss of 5,300 infantry and 480 cavalry. At the steep, narrow Bethoron pass, the Roman force fell into an ambush by archers positioned on the surrounding cliffs. Some escaped under cover of darkness, but at the cost of hundreds of men. Pursued to Antipatris, the Roman forces abandoned supplies, including artillery and battering rams, which the rebels seized. Suetonius claimed the Romans lost their legionary eagle. Gallus died soon after, possibly by suicide. Scholars note that such a decisive Roman defeat in a provincial uprising was rare. The unexpected victory boosted pro-revolt factions, increasing their confidence, and many others were swept up in the enthusiasm. Some elite moderates fled to the Romans; others stayed and joined the rebels. Among those fleeing were Costobar and Saul, members of the Herodian royalty, as well as Philip, son of Iacimus, the prefect of Agrippa's army. Around the same time, a pogrom broke out in Damascus. The city's men, fearing betrayal by their wives, many of whom had converted to Judaism, locked the Jewish population in a gymnasium and, according to Josephus, killed thousands within hours. After Gallus' defeat, a popular assembly convened at the Jerusalem Temple and established a provisional government. Ananus ben Ananus, a former High Priest, was appointed as one of its leaders alongside Joseph ben Gurion, a Pharisee, and other members of the city's priestly elite, including Joshua ben Gamla. The new government divided the country into military districts. Josephus was appointed commander of Galilee and Gaulanitis, while Joseph ben Shimon commanded Jericho. John the Essene led the districts of Jaffa, Lydda, Emmaus, and Thamna; Eleazar ben Hanania and Jesus ben Sappha oversaw Idumaea, with Niger the Perean, a hero of the Gallus campaign, under their command. Menasseh commanded Perea in Transjordan, and John ben Ananias was tasked with Gophna and Acrabetta. Eleazar ben Simon, who had played a role in Gallus' defeat and seized large amounts of money and spoils, was denied any formal position. Simon bar Giora, another leading figure in the victory over Gallus, was likewise overlooked. Citing the exclusion of the Zealots, scholars such as Richard Horsley argued that the government may have only feigned support for the revolt, instead seeking a compromise with Rome. Following the Temple meeting, Jerusalem's priestly leadership began minting coins—an assertion of financial autonomy and a rejection of foreign rule. The coins bore Hebrew inscriptions with slogans like "Jerusalem the Holy" and "For the Freedom of Zion", the latter changed in the fourth year to "For the Redemption of Zion". Dated using a new revolutionary calendar (years one to five), they marked the start of a new era of independence. The silver coins—the first of their kind in Jewish history—were labeled as the "shekel of Israel", "Israel" possibly denoting the state's name. Their denominations (shekel, half-shekel, quarter-shekel) revived the biblical weight system, evoking ancient sovereignty, and the use of Hebrew symbolized Jewish nationalism and statehood. The new government ordered the destruction of Herod Antipas' palace in Tiberias due to its display of images forbidden by Jewish law, possibly to demonstrate zeal or appease rebels. Envoys were sent to Jews in the Parthian Empire to seek support against Rome. In Jerusalem, the unfinished Third Wall protecting the northern flank was completed. 
With no regular army since the Hasmoneans, the government struggled to build one, as most military-age men had joined rebel factions. Rebels acquired arms by stripping the dead and captured, raiding fortresses, commissioning local blacksmiths in Jerusalem, and possibly buying from suppliers connected to the Roman army. During Hanukkah, the Jewish festival commemorating the recovery of Jerusalem in the Maccabean revolt, Niger the Perean and John the Essene led an assault on Ashkelon, a city that remained under Roman control. Two successive attacks were repelled, forcing a retreat. The provisional government lacked broad support, and rival factions soon formed. Some rallied around distinct ideologies, others around charismatic leaders, and they turned their weapons not only against Rome but also against each other. In Galilee, John of Gischala, a wealthy olive oil trader, emerged as a key rebel leader. Initially opposed to the war, he changed his stance after his hometown Gush Halav was attacked by the people of Tyre and Gadara. Leading a group of peasants, refugees, and brigands, he became Josephus' main adversary, but failed to displace him. Meanwhile, Simon bar Giora led attacks on the wealthy in northern Judea. Expelled from Acrabetene, he fled to Masada, where the rebels at first distrusted him but later accepted him into their raids. Vespasian's campaigns After Gallus' defeat, Nero appointed Vespasian—a former consul and seasoned commander—to lead the war effort. Vespasian, a man of humble origins, was chosen—according to Suetonius—for both his military effectiveness and his obscure background, which made him a politically safe choice to suppress the revolt without posing a threat to the emperor. He traveled from Corinth to Syria, assembling Legions V Macedonica and X Fretensis, while Titus, his eldest son, marched Legio XV Apollinaris from Alexandria to Akko-Ptolemais. The Roman force was reinforced by 23 auxiliary cohorts and six cavalry alae, likely drawn from Syria. Local rulers, including Antiochus IV of Commagene, Agrippa II, Sohaemus of Emesa, and Malchus II of Nabatea, contributed additional infantry and cavalry. In early summer 67, Vespasian established his base at Akko-Ptolemais before launching an offensive on Galilee, a heavily populated Jewish region in the north of the province. Rogers estimates that the force Vespasian commanded upon his arrival numbered about 58,000 soldiers and 10,000 slaves. Josephus, who led the defense of Galilee, claimed to have recruited 100,000 young men from the region, though this figure is widely regarded as exaggerated. The people of Sepphoris—the second-largest Jewish city in the country after Jerusalem—soon surrendered and pledged loyalty to Rome. Meanwhile, Jewish forces withdrew into fortified cities and villages, forcing the Romans into prolonged sieges. The Romans captured Gabara in the first assault, with Josephus reporting that all the men were killed. The town and surrounding villages were set on fire, and survivors were enslaved. Around the same time, Titus destroyed the nearby village of Iaphia, where all the men were reportedly slain and the women and children sold into slavery. Cerialis, who commanded Legio V Macedonica, was dispatched against a large group of Samaritans who had gathered atop Mount Gerizim, the site of their ruined temple, and killed many of them. Vespasian then besieged the town of Yodfat, which fell in June or July after a 47-day siege. 
Under Josephus' command, the defenders used improvised materials to absorb the blows of the Roman siege engines and countered with boulders and boiling oil—the earliest known use of this tactic. Arrowheads and ballista stones have been found at the site. When the city fell, the Romans massacred those outside and hunted survivors in hiding. Josephus reported 40,000 deaths, though modern research estimates around 2,000 killed and 1,200 women and infants captured. Josephus recounts that after the town's fall, he and 40 others hid in a deep pit and agreed to commit suicide by drawing lots; he was left among the last two, a scenario that later inspired the well-known "Josephus problem" in mathematics and computer science (see the sketch at the end of this passage). Josephus chose to surrender rather than die, and then prophesied Vespasian's rise to emperor, prompting Vespasian to spare him. Vespasian and Titus then took a 20-day respite in Caesarea Philippi, Agrippa's capital. As military operations resumed, Tiberias, a Jewish-majority city in Agrippa's realm, surrendered without resistance as pro-Roman factions prevailed. Nearby Tarichaea mounted a fierce defense. According to Josephus, its residents did not originally seek war, but the influx of outsiders into the city compelled them to fight. After the town's fall, surviving rebels took to the Sea of Galilee, engaging the Romans in naval skirmishes that resulted in heavy losses for the Jews. Josephus reports 6,700 killed, leaving the lake red with blood and filled with bodies. Afterward, Vespasian separated the local prisoners from the "foreigners" blamed for instigating the revolt; the latter were forced to travel along a guarded route to Tiberias, where, in the city's stadium, 1,200 were executed. Six thousand young men were reportedly sent to work on the Corinth Canal in Greece, others were given to Agrippa II, and 30,400 were sold into slavery. The next target was Gamla, a fortified city on a steep rocky promontory in the southern Golan, which Vespasian besieged for six weeks in the fall of 67. Archaeological finds at the site include pieces of armor, arrowheads, and hundreds of ballista and catapult stones. Gamla's synagogue was seemingly repurposed as a refuge area, as indicated by fireplaces, cookpots, and storage jars buried under ballista stones. Despite heavy casualties, the Romans eventually seized the town in late October, and it was never resettled. According to Josephus, only two women survived; the rest either threw themselves into ravines or were killed by the Romans. In Gush Halav, John of Gischala opened surrender talks but used a brief Shabbat respite granted by Titus to flee with his followers. Titus encamped a few miles away at Kedasa, and when he returned, the city surrendered. The Romans also captured the fortress on Mount Tabor. Another Roman force retook Jaffa, ending rebel piracy that had disrupted naval routes and grain supplies; a storm helped by destroying the rebel fleet. With the conclusion of the Galilee campaign, Jerusalem descended into chaos, overcrowded with refugees and rebels. The Zealots, led by Eleazar ben Simon and Zachariah ben Avkilus, opposed the moderate government, continuing the anti-Roman stance of Eleazar ben Hanania. Allied with John of Gischala, who likely arrived in late 67 CE, they executed suspected collaborators, seized the Temple, and appointed Phannias ben Samuel—an unqualified villager without high-priestly lineage—as High Priest by lot. In response, moderate leader Ananus ben Ananus rallied popular support to confront the Zealots. 
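As a brief aside to the Yodfat episode above: in its standard mathematical form, the Josephus problem asks which position survives when n people stand in a circle and every k-th person is eliminated until one remains. The following Python sketch computes the survivor with the classic linear-time recurrence; the parameters n = 41 and k = 3 follow the popular retelling of the episode (41 men counting off by threes) and are an illustrative assumption rather than a detail fixed by Josephus' own account.

```python
def josephus_survivor(n: int, k: int) -> int:
    """Return the 1-indexed position of the last survivor when every
    k-th person in a circle of n people is eliminated."""
    pos = 0  # 0-indexed survivor position for a circle of size 1
    for circle_size in range(2, n + 1):
        # Growing the circle from size-1 to size shifts the survivor's
        # 0-indexed position by k, modulo the new circle size.
        pos = (pos + k) % circle_size
    return pos + 1  # convert back to a 1-indexed position

if __name__ == "__main__":
    # With the traditional parameters, position 31 is the last one standing.
    print(josephus_survivor(41, 3))
```

The recurrence avoids simulating the circle directly (for example, with a queue of positions), reducing the computation to a single pass over the circle sizes.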
Though the Zealots launched a preemptive attack, they were overpowered and forced to retreat into the Temple. Urged by John, they sent a letter to the Idumaeans, alleging that Ananus was betraying Jerusalem to Rome. The Idumaeans entered the city during a storm and, alongside the Zealots, massacred Ananus' forces and civilians alike. The Idumaeans looted the city, killed the former High Priests Ananus ben Ananus and Joshua ben Gamla, and left their bodies unburied, in violation of Jewish law. Many Idumaeans later withdrew in regret; others went on to join Simon bar Giora. Through the winter of 67/68, the Zealots consolidated their control over Jerusalem by terror, holding tribunals and murdering moderates, including Niger the Perean and Joseph ben Gurion. Upon hearing of the events from deserters, Vespasian decided against marching on Jerusalem, reasoning that it was wiser to let the Jews destroy one another. In spring, during the Passover feast, the Sicarii descended from Masada and raided the wealthy village of Ein Gedi on the southwestern shore of the Dead Sea; Josephus does not clarify their motivation. They reportedly killed 700 women and children, looted homes, and seized crops before returning to the fort. Similar raids on nearby villages devastated the area and attracted new recruits. While civil war raged in Jerusalem, the Romans continued their campaign. After Titus returned from Galilee to Caesarea, Vespasian advanced to Yavneh and Azotus, which were subdued and garrisoned, before he returned to Caesarea with many captives. In January 68, the leaders of Gadara in Perea sent a delegation to Vespasian to offer their surrender. As he advanced, opponents of the surrender killed a leading citizen and fled. The remaining residents dismantled the city walls, allowing Roman forces to enter and establish a garrison. Meanwhile, the fugitives attempted to rally support in nearby Bethennabris, but were defeated by Roman forces. The survivors, seeking refuge in Jericho, were massacred near the Jordan River, where over 15,000 were reportedly killed and many drowned or were captured. The Romans then took the rest of southern Perea, capturing and garrisoning Abila, Iulias, and Besimoth, and soon controlled the entire region apart from Machaerus. In spring 68, Vespasian systematically subdued settlements en route to Jerusalem, delaying the siege to gather supplies from the spring harvest and to let the internal factions weaken. After capturing Antipatris, Vespasian advanced, burning and destroying nearby towns. He reduced the district of Thamna and resettled Lydda and Yavneh with surrendered inhabitants. By April 68, he had stationed Legio V Macedonica at Emmaus. From there, he advanced to Bethleptepha, burning the area and parts of Idumaea, before capturing Betabris and Caphartoba, reportedly killing over 10,000 people and taking 1,000 prisoners. By May–June, he camped at Corea, passed through Mabartha (later Flavia Neapolis) in Samaria, and advanced to Jericho, joining the force that had taken Perea. Survivors had fled to Jericho, but when the Romans arrived they found it deserted, as the inhabitants had escaped to the Jerusalem Mountains. With Jericho garrisoned, the fertile Jordan Valley came under Roman control, and another garrison was installed at Adida, east of Lydda. Vespasian then visited the Dead Sea. Archaeological evidence indicates that around this time, the Qumran community, commonly linked to the Essenes, was destroyed, with some members possibly joining the rebels at Masada. 
Commander Lucius Annius then captured and burned Gerasa (likely a textual error for Gezer), executing many young men, enslaving women and children, and razing nearby villages, killing those who could not flee. Simon bar Giora gained strength outside Jerusalem, extending his influence over Judea. He plundered the wealthy, freed slaves, and promised gifts to his followers. The Zealots in Jerusalem viewed Simon's growing power as a threat and sent an army to confront him. After defeating it, he fought an Idumaean force to a stalemate before withdrawing to Nain, where he prepared to invade Idumaea. From his staging camp in Teqoa, he attempted to capture Herodium but failed. Later, at Alurus, an Idumaean officer betrayed his own army by returning from a reconnaissance mission with inflated reports of Simon's strength, prompting the commanders to surrender without resistance. Simon's subsequent successes, including the capture of Hebron, prompted the Zealots to lay ambushes in the mountain passes leading into Jerusalem. When they captured his wife, he retaliated by torturing captives and threatening to destroy Jerusalem's walls unless she was returned. The Zealots complied, and Simon paused his campaign. As the war progressed, major political upheavals were taking place in Rome. In June 68, Nero fled Rome and committed suicide, sparking a war of succession known as the "Year of the Four Emperors". After only a few months in power, Emperor Galba was murdered by supporters of his rival, Otho. Meanwhile, in Jerusalem, the Galilean Zealots plundered the homes of the wealthy, murdered men, and raped women. Josephus reports that they then began to adopt the attire and behavior of women, imitating their ornaments and desires and engaging in what he describes as "unlawful pleasures". Those who fled the city were killed by Simon bar Giora and his followers outside the walls. In April 69, the rivals of John of Gischala opened Jerusalem's gates to Simon bar Giora. Simon took control of much of the city, including the Upper City, where he based himself at the Phasael Tower, much of the Lower City, and the northern suburbs, but he failed to dislodge John, who retained control over the Temple area. Simon's forces grew as the Idumaeans and nobles joined him. In June 69, Vespasian subdued the toparchies of Gophna and Acrabetta and captured the cities of Bethel and Ephraim. He then approached Jerusalem's walls, killing many and capturing others, marking his closest approach to the city. Meanwhile, Cerialis led a scorched-earth campaign in northern Idumaea, burning Caphethra and capturing Capharabis, whose residents surrendered to the Romans with olive branches, sparing the town from destruction. The Romans then destroyed Hebron and slaughtered its inhabitants. Infighting in Jerusalem persisted throughout the summer of 69. The rival factions burned the city's food supplies to weaken their opponents, severely depleting the resources needed to withstand the impending siege. According to Tacitus, "There were constant battles, treachery and arson among them, and a large store of grain was burnt." Several rabbinic sources report that extremists set fire to the supplies to compel the people to fight the Romans. The destruction of supplies led to widespread starvation. According to Josephus, Vespasian was proclaimed emperor by his troops in Caesarea in mid-69, though the official account places his first acclamation on 1 July in Alexandria. 
After reluctantly accepting, he secured the support of Egypt, followed by Syria and other provinces. With military operations in Judaea paused, he traveled to Alexandria in autumn 69 and remained there with Titus during the winter. With Vitellius, the reigning emperor, dead on 20 December 69, the Senate conferred imperial authority on Vespasian the next day. Command in Judaea was transferred to Titus, while Vespasian stayed in Egypt until late summer 70, when he sailed to Rome to secure the throne. Siege of Jerusalem and conclusion of the war Late in the winter of 69/70, Titus returned to Judaea with over 48,000 troops, establishing his base in Caesarea. His forces included the legions V Macedonica, X Fretensis, XV Apollinaris, and XII Fulminata, auxiliaries from Egypt and the vassal kingdoms, and Arab allies reportedly driven by long-standing hostility toward the Jews. In early Nisan (March/April) 70, Titus camped near Gibeah, north of Jerusalem, choosing to attack from the north, where the terrain lacked natural defenses. Jerusalem, then swollen with pilgrims attending the Passover festival and with refugees, faced mounting pressure as Roman forces approached. The warring factions united only as the Romans battered the city's walls. Titus narrowly escaped an ambush during reconnaissance, then established camps at Mount Scopus and the Mount of Olives, repelling a Jewish surprise attack during the latter's construction. On 14 Nisan, at the onset of Passover, the Jews halted their attacks to observe the holiday; the Romans exploited the lull to position their siege forces. That night, as the sanctuary's inner gates were opened to worshippers, John's faction infiltrated the inner court with concealed weapons and overpowered the Zealots, who then accepted a truce. After fifteen days, the Romans breached the Third Wall and captured the northern suburbs. The Second Wall was breached soon after; though initially unable to hold the area, the Romans later secured it, destroyed northern Jerusalem, and paraded their forces for psychological effect. A famine ravaged the city, with Josephus describing mass suffering and even cannibalism. Attempted escapees were executed by both rebels and Romans, as Arab and Syrian auxiliaries disemboweled refugees while searching for hidden valuables. By Sivan (May/June), the Romans had completed a circumvallation wall, which effectively cut off supplies and escape routes. The defenders destroyed the siege engines targeting the Antonia Fortress by tunneling beneath them and setting them ablaze, but the fortress eventually fell, leading the Romans to turn their assault toward the Temple. The defenders burned the porticoes linking the sanctuary to the fortress to block Roman access and took refuge in the courtyards. On 8 Av (July/August), the sanctuary's outer court was breached. On 10 Av, a Roman soldier hurled a burning object into the Temple, sparking a blaze that consumed the structure. According to Josephus, Titus intended to preserve the Temple as a symbol of Roman rule, and when it caught fire, he ordered the flames extinguished, but his soldiers ignored or did not hear him. The 4th-century historian Sulpicius Severus, reflecting a tradition often traced to Tacitus, claimed that Titus had explicitly ordered the Temple's destruction. Modern scholarship often favors the view that Titus authorized the destruction, though the matter remains subject to debate. 
Amid the fire, chaos reigned—mass suicides and indiscriminate slaughter followed.[s] The remaining structures on the Temple Mount were razed. Titus ordered the destruction of several districts, including the Acra and the Ophel, followed by the entire Lower City. On 20 Av, the Upper City was stormed. Soldiers massacred people in their homes and streets, and many who fled into tunnels were either killed or captured. According to Josephus, Titus spared only three towers of Herod's palace and a portion of Jerusalem's western wall for a Roman garrison, while the rest of the city was systematically razed. The archaeological record confirms widespread destruction and burning across the city in 70 CE. After the city's fall, the elderly and infirm were killed against Titus' orders, while younger survivors were sorted: rebels executed, the strongest sent to Titus' triumph, those over 17 enslaved or executed across the empire, and children sold into slavery. John of Gischala surrendered and was sentenced to life imprisonment, and Simon bar Giora, captured after emerging from a tunnel, was brought in chains before Titus. After Jerusalem's fall, Titus toured Judaea and southern Syria, funding spectacles with Jewish captives.[t] In Caesarea Philippi, he staged executions, gladiatorial combat, and wild animal killings. For his brother Domitian's birthday, celebrated in Caesarea Maritima, 2,500 captives were slaughtered in similar games. More executions followed during Vespasian's birthday celebrations in Berytus. In the summer of 71, a triumph was celebrated in Rome to mark the victory in Judaea—the only imperial triumph ever held for the subjugation of a provincial population already under Roman rule. The event, witnessed by hundreds of thousands of spectators, featured Vespasian and Titus riding in chariots. The procession displayed treasures and artworks, including tapestries, gemstones, statues, and animals. Among the treasures carried in the procession were the Temple's menorah, a golden table, possibly that of the Showbread, and "the law of the Jews", likely sacred texts taken from the Temple. According to Josephus, Jewish captives were paraded "to display their own destruction", while multi-story scaffolds showcased ivory and gold craftsmanship, illustrating scenes of the war. Simon bar Giora was paraded in the procession and, upon its end on Capitoline Hill, whipped severely and taken to the Mamertine Prison, where he was executed by hanging. In the spring of 71, Titus departed for Rome, leaving three fortresses still under rebel control. Sextus Lucilius Bassus, the new legate of Judaea, was tasked with their conquest. Herodium, located south of Jerusalem, appears to have fallen rapidly. Bassus then crossed the Jordan River to besiege Machaerus, constructing a circumvallation wall, siege camps, and an incomplete assault ramp, traces of which still exist today. The rebels capitulated after witnessing the Romans prepare Eleazar, a well-born young man who had ventured outside the fort, for crucifixion. They then negotiated terms, securing assurances of safe passage for the Jewish defenders. The Romans slaughtered all non-Jews at the site, except for a few who escaped. Bassus then pursued rebels led by Judah ben Ari in the forest of Jardes.[u] Roman cavalry surrounded the forest while infantry cut down trees and overpowered the outmatched rebels; 3,000 were reportedly killed. Bassus later died of unknown causes. 
Lucius Flavius Silva succeeded Bassus and, during the winter of 72/73 (or possibly 73/74), led a force of about 8,000 troops—including Legio X Fretensis and auxiliaries—to besiege Masada, the last rebel stronghold. When its Sicarii defenders refused to surrender, he established siege camps and a circumvallation wall around the fort, along with a siege ramp, features that remain among the best-preserved examples of Roman siegecraft. The siege lasted between two and six months. According to Josephus, when it became evident that the last fortification would fall, Eleazar ben Yair, the leader of the rebels, delivered a speech advocating collective suicide. He argued that this act would preserve their freedom, spare them from slavery, and deny their enemies a final victory. The rebels carried out the plan, each man killing his own family before taking his own life. When the Romans entered the fortress, they found that 960 of the 967 inhabitants had committed suicide. Only two women and five children survived, having concealed themselves in a cistern. Archaeological work at Masada uncovered eleven ostraca (one of which contained the name of Ben Yair, possibly used to determine the order of suicide), twenty-five skeletons of the defenders, ritual baths, and a synagogue. Findings at the site support Josephus' account of the siege, though the mass suicide's historicity remains debated.[v] Aftermath The revolt's suppression had a profound impact on the Jews of Judaea. Many died in battles, sieges, and famine; cities, towns, and villages across the region suffered varying degrees of destruction. The Jewish capital of Jerusalem—praised by Pliny the Elder as "by far the most famous city of the East"—was systematically destroyed, with much of its population massacred or enslaved. Tacitus described the siege as involving "six hundred thousand" besieged people of all ages and both sexes, remarking: "Both men and women showed the same determination; and if they were to be forced to change their home, they feared life more than death." Josephus claimed that 1.1 million people died in Jerusalem, including pilgrims present for Passover—a figure widely considered exaggerated. Historian Seth Schwartz estimates the population of Judaea at roughly 1 million (half Jewish), noting that large Jewish communities survived the war. Rogers similarly interprets Josephus' figure as intended to flatter the Romans and instead suggests 20,000–30,000 deaths in Jerusalem. Classicist Charles Murison suggests the 1.1 million may refer to total war losses. Aside from Jerusalem itself, Judea proper experienced the most severe devastation, particularly in the Judaean Mountains. Cities like Lod and Yavneh and their surroundings remained relatively intact. In Galilee, Tarichaea (likely Magdala) and Gabara were destroyed, but Sepphoris and Tiberias reconciled with the Romans and escaped major harm. Mixed cities saw the elimination of their Jewish populations, and the impact extended into parts of Transjordan. Furthermore, large numbers of Jews were taken captive. Josephus' report of 97,000 captives has been accepted by several scholars.[w] Many faced harsh treatment, execution, or forced labor. Some strong young men were sent into gladiatorial combat across the empire. Youths of both sexes were sent to brothels. Many were sold into slavery, most of them exiled abroad. 
Historian Moshe David Herr estimates that a quarter of Judaea's Jews were killed and another tenth captured, effectively erasing about one-third of the province's Jewish population. Despite the devastating losses, Jewish life recovered and continued to flourish in Judaea. Jews remained the largest population group in the region, and Jewish society eventually regained enough strength to rise in revolt again during the Bar Kokhba revolt (132–136 CE). That rebellion's suppression proved even more catastrophic, leading to the widespread destruction and depopulation of Judea proper. The uprising effectively ended the already limited Jewish autonomy under Rome. The social impact was profound, particularly for the classes closely associated with the Temple. The aristocracy, including the High Priesthood, which had held significant influence and amassed great wealth, collapsed entirely. Their fall, along with that of the Sanhedrin, created a leadership vacuum. The revolt significantly impacted Judaea's economy and, to a lesser extent, the broader Jewish world. The influx of pilgrims had concentrated vast wealth in Jerusalem, but its destruction ended this prosperity. The Romans confiscated and auctioned the land of Jews who participated in the insurrection, affecting many landowners in Judea proper. The date and balsam groves of Jericho and Ein Gedi, along with other "royal lands", were incorporated into Vespasian's estate. The countryside was devastated; Josephus reports that all trees around Jerusalem were felled during the siege, leaving the land barren. Only a few Jews remained in Jerusalem's vicinity, which Pliny the Elder now referred to as the toparchy of Orine, a name that appears to reflect the region's mountainous terrain. The emperor took control of the area, and the Jews were forced to work it as quasi-tenants. Following the destruction of Jerusalem, the Romans imposed a new tax, the Fiscus Judaicus, on all Jews across the Empire.[x] This tax required Jews to pay an annual sum of two drachmas, replacing the half-shekel previously donated to the Temple. The funds were redirected to the rebuilding and maintenance of the Temple of Jupiter Capitolinus in Rome, which had been destroyed during the civil war of 69. The tax implicitly held all Jews in the Roman Empire responsible for the revolt, even though most had no role in the conflict. Under Domitian, tax enforcement became more stringent. Suetonius wrote that Domitian extended the tax to those who lived as Jews without openly acknowledging it and to those who hid their Jewish background. His successor, Nerva, reformed the tax system, applying it only to Jews who observed their ancestral customs. Following the revolt, Jerusalem was garrisoned by Legio X Fretensis, which remained stationed there for nearly two centuries. The Roman forces also included cavalry alae and infantry cohortes. This increased presence prompted changes in the province's administrative structure, requiring the appointment of a governor (legatus Augusti pro praetore) of ex-praetorian rank. Within this new framework, the regions of Judea and Idumaea were designated as a military zone (campus legionis) under the command of officers from Legio X. Former soldiers, along with other Roman citizens, established themselves in Judaea. Vespasian settled 800 veterans in Motza, which became a colony named Colonia Amosa or Colonia Emmaus. He also granted colony status to Caesarea, renaming it Colonia Prima Flavia Augusta Caesarensis and settling many veterans there. 
A large odeon was reportedly built in the city on the site of a former synagogue, using war spoils. The devastated port town of Jaffa was re-founded, and a new city, Flavia Neapolis, was founded in Samaritis, near the ruins of Shechem. The revolt led to the revocation of many privileges previously enjoyed by Jews in the diaspora. Roman authorities took measures to quell possible uprisings, focusing on individuals deemed troublemakers in Egypt and Cyrenaica, which had absorbed thousands of refugees and insurgents from Judaea. According to Josephus, a group of Sicarii fled to these regions, where they tried to incite rebellion and, even under torture, refused to acknowledge the emperor as "lord". Jewish institutions were now seen as potential sources of rebellion, leading to the closure of the Jewish temple at Leontopolis in Egypt in 72. In the spring of 71, upon arriving in Antioch, Titus faced demands from the city's residents to expel the Jews, but he refused, stating that the Jews' country had been destroyed and that no other place would accept them. The crowd then sought the removal of tablets inscribed with the Jews' rights, but Titus again declined. In 73, the Jewish aristocracy in Cyrenaica was killed. Vespasian did not openly approve, but he implicitly endorsed it by treating the responsible Roman governor leniently. In the wake of the revolt, thousands of Jewish slaves were brought to the Italian Peninsula. A tombstone from Puteoli, near Naples, mentions a captive woman from Jerusalem named Claudia Aster, whose name Aster is believed to derive from Esther. The Roman poet Martial references a Jewish slave of his, described as originating from "Jerusalem destroyed by fire". Jewish slaves brought to Italy after the war are also evidenced by graffiti in Pompeii and other places in Campania, as well as possibly by Habinnas, a character in Petronius' Satyricon who may have been Jewish. There are records of other Jews bearing the nomen "Flavius", possibly indicating descent from freed captives. Rome itself experienced a significant influx of Jewish slaves. The destruction of Jerusalem also brought Jews to the Arabian Peninsula, leading to the establishment of settlements in southern Yemen, along the coast of Ḥaḍramawt, and most notably in the Hejaz, particularly in Yathrib (later Medina), where they became prominent representatives of monotheism in pre-Islamic Arabia. Around the same period, Jews also began settling in Hispania (modern Spain and Portugal) and Gaul (modern France). Vespasian, who came from a relatively modest background, leveraged his victory to solidify his claim to the emperorship, elevate Rome's prestige, and redirect attention from the civil war that had brought him to power, heralding an era of peace reminiscent of Augustus' reign. His dynasty framed its legitimacy on triumph over a foreign enemy. The Flavians issued a series of coins inscribed with the title Judaea Capta ("Judaea has been conquered") to commemorate the subjugation of the province. Issued over a 10–12-year period, the series marked a rare instance of a provincial defeat being celebrated in Roman coinage and served as a key component of Flavian propaganda. The obverse of the coins typically featured portraits of Titus or Vespasian, while the reverse depicted symbolic imagery, including a mourning woman, representing the Jewish people, seated beneath a date palm, a symbol of Judaea. 
Variations in the designs included depictions of the woman bound, kneeling, or blindfolded before Nike (or Victoria), personifications of victory. Rome's city center was reshaped with victory monuments, including two triumphal arches: the Arch of Titus on the Via Sacra, completed after Titus' death in 81, and another, probably at the Circus Maximus, finished earlier that same year. The first, still standing and widely attributed to Domitian, was dedicated by the Senate and People of Rome to the divine Titus. It features reliefs of soldiers carrying Temple spoils and Titus in a quadriga during the triumph. The second arch's inscription proclaimed that Titus "subdued the Jewish people and destroyed the city of Jerusalem, a thing either sought in vain by all generals, kings and peoples before him or untried entirely".[y] The Temple spoils, including the menorah, were displayed in the newly built Temple of Peace, alongside other masterpieces of art. The temple, dedicated to Pax, the Roman goddess of peace, symbolized the restoration of peace throughout the Empire. The Colosseum, initiated by Vespasian and completed under Titus, was financed "ex manubi(i)s" (from the spoils of war), as noted in an inscription, tying its funding to the Jewish War. Construction works commemorating the victory seem to have also taken place in Syria. John Malalas, a 6th-century Byzantine chronicler, wrote that a synagogue in Daphne, near Antioch, was destroyed during the war and replaced by Vespasian with a theater, whose inscription claimed it was founded "from the spoils of Judaea". He also describes a gate of cherubs in Antioch, established by Titus from the spoils of the Temple. Legacy The destruction of the Second Temple, a symbol of God's presence central to Jewish life, created a deep religious and societal void. It ended sacrificial offerings, terminated the High Priesthood's lineage, and led to the disappearance of Jewish sectarianism. The Sadducees, whose authority depended on the Temple, dissolved due to the loss of their power base, their role in the revolt, land confiscations, and the collapse of Jewish self-governance. The Essenes too disappeared from the historical record.[z] The Pharisees—who had largely opposed the revolt—survived. Their spiritual successors,[aa] the rabbinic sages, emerged as the dominant force in Judaism through the rise of the rabbinic movement, which reoriented Jewish life around Torah study and acts of loving-kindness. According to rabbinic sources,[ab] Rabban Yohanan ben Zakkai (Ribaz), a prominent Pharisaic sage, was smuggled out of besieged Jerusalem in a coffin by his students. After prophesying Vespasian's rise to emperor,[ac] he secured permission to establish a rabbinic center in Yavneh.[ad] There, a system of rabbinic scholarship began to form,[ae] laying the foundation for Rabbinic Judaism as the dominant form of Judaism in later centuries. Under Ben Zakkai and his successor Gamaliel II, various enactments adapted Jewish life to post-Temple reality, including extending Temple-related practices for observance outside the Temple. For example, the mitzvah (religious commandment) of taking the lulav was extended to all seven days of Sukkot everywhere, whereas it had previously been observed only in the Temple. The shofar was also permitted to be sounded in any courtyard when the New Year coincided with Shabbat. 
The prayer liturgy was formalized, including the Amidah, which was established to be recited three times daily as a substitute for the sacrificial offerings. The rabbinic reconstitution of Judaism continued over the subsequent centuries, culminating in the compilation of the Mishnah and later the two Talmuds, which became foundational texts of Jewish law. The synagogue increasingly became the center of Jewish worship and community life. Rabbinic literature describes it as a "diminished" sanctuary, stating that divine presence resides there, especially during prayer or study. Traditional synagogue worship—including sermons and scripture readings—was preserved, and new forms such as piyyut (liturgical poetry) and organized prayer emerged. The priestly class, resettled in Galilee and the diaspora, helped shape these developments by contributing to synagogue liturgy and possibly to biblical translations. Rabbinic instruction maintained that certain rituals remained exclusive to the Temple, and most synagogues face toward its site. The Temple's destruction is commemorated in Judaism on Tisha B'Av, a major fast day that also marks the destruction of the First Temple alongside other tragedies in Jewish history. The Western Wall, a remnant of the Temple complex, became a symbol of the homeland's destruction and the hope for its restoration. Following the destruction, some Jews reportedly mourned the loss by abstaining from meat and wine, while others withdrew to caves, awaiting redemption. In late antiquity, some communities even adopted the year of the Temple's destruction as a reference point for life events. Jewish apocalyptic literature experienced a resurgence, mourning the Temple's destruction while offering explanations for the events. The Apocalypse of Baruch and Fourth Ezra interpreted the destruction of the Second Temple through the lens of the First, reusing its figures, historical setting, and biblical motifs to portray contemporary events as divinely ordained and heralding the end times. Drawing on the biblical precedent of Jerusalem's restoration after the Babylonian exile, they prophesied Rome's fall and Jerusalem's renewal. Both works affirmed Jewish continuity through the Torah and the enduring validity of the covenant with God. Book 4 of the Sibylline Oracles—a collection of Jewish and later Christian prophecies—likely written after the eruption of Mount Vesuvius in 79, links the destruction to the Roman civil war, retroactively prophesying a Roman leader who would burn the Temple and devastate the land of the Jews. It also foretells Nero's return as divine retribution against Rome and the Flavians. The rabbinic response to Jerusalem's destruction is reflected in tales, traditions, and exegetical writings integrated into rabbinic literature. Early rabbinic works convey profound grief and anguish, as exemplified by the Mishnah, which states that since the destruction, "there has been no day without its curse".[af] Some texts attribute the destruction to punishment for Israel's sins and societal failings, such as weak leadership, internal divisions, misuse of wealth, and a lack of communal care. The Babylonian Talmud (Yoma 9b) explains that whereas the First Temple was destroyed due to idolatry, immorality, and bloodshed, the Second Temple fell because of the equally grave issue of groundless hatred. Another passage in the Babylonian Talmud (Gittin 55a) recounts the story of Kamsa and Bar Kamsa, in which a banquet host mistakenly invites Bar Kamsa instead of Kamsa. 
When Bar Kamsa is dishonored by being denied a seat, he becomes an informer to the Romans, triggering a series of events that lead to the war. Judaic scholars Moshe and David Aberbach argued that the revolt's suppression left Jews "deprived of the territorial, social, and political bases of their nationalism", forcing them to base their identity and hopes for survival on cultural and moral power. Historian Adrian Hastings wrote that following the revolt, Jews ceased to be a political entity resembling a nation-state for almost two millennia. Despite this, they preserved their national identity through collective memory, religion, and sacred texts, remaining a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of modern Israel. The revolt has been identified by several scholars as one of the stages in the gradual separation between Christianity and Judaism. It led to the destruction or dispersal of the Jerusalem church, the original center of the Christian community. According to later Christian sources like Eusebius and Epiphanius,[ag] Jerusalem's Christians fled to Pella before the war following divine guidance, though the historicity of this tradition remains debated. Scholar of Judaism Philip S. Alexander argued that, in the aftermath of the Temple's destruction, Christianity attempted to appeal to Jews in Judaea but failed due to its radical doctrines and the success of the rabbinic movement. Meanwhile, Christian groups in Asia Minor and the Aegean continued to grow, relatively insulated from the war's effects. Theologian Jörg Frey contends that the Temple's destruction had only a limited impact on Christian identity, which was shaped more significantly by the development of Christology. Theologically, the destruction of the Temple was interpreted by early Christians as divine punishment for the Jewish rejection of Jesus. This idea appears in the New Testament Gospels, which include prophecies attributed to Jesus about the destruction of Jerusalem; the Gospel of Matthew may also allude to the burning of the city by Titus. The Epistle of Barnabas, a work of New Testament apocrypha, attributes the destruction to the Jews' role in bringing about the war, and presents it as evidence that God rejected the physical Temple in favor of a spiritual one, embodied in the faith of Gentile believers. By the 4th century, Church Fathers like Eusebius and John Chrysostom had fully integrated this view, portraying the destruction as both retribution and the symbolic beginning of the apostolic mission to the wider world. The eschatological view of preterism, which holds that many or all New Testament prophecies were fulfilled in the first century, interprets Jerusalem's destruction as the fulfillment of Jesus' prophecies. Partial preterists see the event as marking the end of the Old Covenant and God's judgment on Israel, while maintaining belief in a future return of Christ and final judgment. In contrast, full preterists see it as the fulfillment of all New Testament eschatology, including resurrection (understood as deliverance of believers from the condemnation of death imposed by Jewish authorities) and judgment, enacted through Christ's use of Rome's armies to destroy the Temple and inaugurate the New Covenant. Two further Jewish revolts against Rome occurred in the second century. In 115, the Diaspora Revolt erupted, with large-scale uprisings in multiple provinces and limited activity in Judaea. 
The causes were rooted in the Temple's destruction and the Jewish tax. Refugees and traders from Judaea are believed to have spread the ideas of the first revolt, as evidenced by the discovery of revolt coinage in these areas. The revolt's suppression led to the near-total annihilation of Jewish communities in Cyprus, Egypt, and Libya. In 132, the Jews of Judaea launched their last major effort to regain independence—the Bar Kokhba revolt—triggered by the establishment of Aelia Capitolina, a Roman colony on Jerusalem's ruins. The revolt led to widespread destruction and the near-total depopulation of Judea, with many Jews killed or sold into slavery and transported abroad. After the fall of Betar in 135, Hadrian imposed harsh anti-Jewish laws to dismantle Jewish nationalism, banned Jews from Jerusalem, and renamed the province Syria Palaestina, ending Jewish aspirations for national independence. The Jewish population had significantly declined; most Jews were now concentrated in Galilee. By the late 2nd century CE, under the rabbinic patriarch Judah ha-Nasi, the Jews had reached a pragmatic coexistence with Rome. Historical sources The main primary source for the revolt is Josephus (37/38 – c. 100 CE), born Yosef ben Mattityahu, a Jewish historian of priestly descent and a native of Jerusalem, who led the defense of Galilee early in the war. After surrendering to the Romans, he was held captive for two years and gained his freedom following Vespasian's accession in 69. In 70, he accompanied Titus during the siege of Jerusalem, and in 71 he moved to Rome, where he received Roman citizenship and the name Flavius Josephus. He spent his later years living under imperial patronage and writing historical works. Josephus' first work and primary account of the revolt, The Jewish War, completed by 79, chronicles the revolt in seven volumes. Originally written in his native language, probably Aramaic, it was later rewritten in Greek with assistance. Claiming to correct biased accounts, Josephus also sought to deter future revolts. His firsthand experience, supplemented by accounts from deserters and Roman records, shaped his narrative. He minimized the collective responsibility of the Jewish people for the revolt, blaming a rebellious minority,[ah] corrupt and brutal Roman governors, and divine will. Though he took pride in the endorsement he received from Vespasian and Titus for the accuracy of his writings, he was likely compelled to present his account in a manner that aligned with their messages or, at the very least, did not contradict them.[ai] At the same time, his experience as a participant and eyewitness, as well as his knowledge of both the Jewish and Roman worlds, renders his account an invaluable historical source. Josephus' later autobiography, Life, written as an appendix to another work, Antiquities of the Jews, focuses on his role in Galilee. It was a rebuttal to the now-lost A History of the Jewish War by Justus of Tiberias, which was published twenty years after the revolt and challenged Josephus' earlier narrative and religiosity. In Life, Josephus provides a detailed account of the events of 66–67 that contrasts with his first work, revealing differences in his portrayal of events. Aside from Josephus, the written sources for the revolt are limited. Tacitus' Histories, written in the early 2nd century, offers a detailed Jewish history in Book 5 as a prelude to the revolt, though his siege narrative is incomplete. 
Cassius Dio's account in Book 66 survives only in epitomes, while Suetonius provides occasional remarks. These sources complement and sometimes contradict Josephus, helping to refine and corroborate his account where its reliability is debated. Rabbinic literature offers insights into the war but presents challenges for historians, as it was primarily legal and theological, not historical. Oral transmission often embellished events for religious or ethical reasons, though some descriptions, like those of the famine in Jerusalem, align with external sources, confirming parts of the historical narrative. More information on the revolt can be deduced from archaeological, numismatic, and documentary evidence. Excavations at sites destroyed during the war reveal military tactics, preparations, and the impact of the sieges and battles. Jewish revolt coins reflect rebel ideology, messaging, and aims. Texts such as the documents from Wadi Murabba'at, featuring dating formulas and phrases similar to revolt coinage, shed light on daily life and legal matters during the uprising. 
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alexandria] | [TOKENS: 12823] |
Alexandria Alexandria[a] is a major city in Egypt. Lying at the western edge of the Nile River Delta, it extends about 40 km (25 mi) along the country's northern coast. It is Egypt's principal seaport, the second-largest city after Cairo, and the largest city on the Mediterranean coast. Founded in 331 BC by Alexander the Great, Alexandria was one of the largest and most important cities of antiquity and a leading hub for science, culture, and scholarship. Nicknamed the "Bride of the Mediterranean" and "Pearl of the Mediterranean Coast", the city is a popular tourist destination and a major industrial centre. It is the sixth-largest city in the Arab world and in the Middle East, and the eleventh-largest city and urban area in Africa. The capital of the Alexandria Governorate, Alexandria is considered an industrial hub and is home to the Alexandria Shipyard. The city also has a large financial sector, and its ancient port is one of the busiest in the country. Alexandria is the host city of the annual Alexandria Mediterranean Countries Film Festival, held at the Bibliotheca Alexandrina. The city is also the home of the Alexandria Opera House, the Alexandria Museum of Fine Arts and the Alexandria National Museum. The city hosts many sporting events, and is the home of the association football club Al Ittihad. Alexandria's urban agglomeration extends beyond its administrative municipal limits, with a population of 5,362,527 in 2023 over an area of 1,661 square kilometres (641 sq mi). Alexandria was originally established near an ancient Egyptian settlement named Rhacotis, which later became its Egyptian quarter. The city was made the capital of the Ptolemaic Kingdom and became the foremost commercial, intellectual, and cultural centre for much of the Hellenistic age and late antiquity; at one time, it was the most populous city in the ancient world. Alexandria was best known for the Lighthouse of Alexandria (Pharos), one of the Seven Wonders of the Ancient World; its Great Library, the largest in the ancient world; and the Catacombs of Kom El Shoqafa, one of the Seven Wonders of the Middle Ages. Alexandria retained its status as one of the leading cities of the Mediterranean world for almost a millennium, serving as the Egyptian capital until a new capital was founded at Fustat, now part of Cairo. The city was a major hub of early Christianity and hosted the Patriarchate of Alexandria, one of the leading Christian centers in the Eastern Roman Empire; the modern Coptic Orthodox Church and the Greek Orthodox Church of Alexandria both lay claim to this ancient heritage. By the mid-seventh century, the city still served as a trading hub and naval base. From the late 18th century, it was a major centre of the international shipping industry and one of the most important trading centers in the world, owing to the easy overland connection between the Mediterranean and Red Seas and the lucrative trade in Egyptian cotton. Alexandria's rebirth began in the early 19th century under Muhammad Ali, considered the founder of modern Egypt, who implemented infrastructure projects and modernisation efforts. Name Alexandria was located on the earlier Egyptian settlement, which was called Rhacotis (Ancient Greek: Ῥακῶτις, romanized: Rhakôtis), the Hellenised form of Egyptian r-ꜥ-qd(y)t (Bohairic Coptic: ⲣⲁⲕⲟϯ, romanized: Rakoti). 
As one of many settlements founded by Alexander the Great, the city he founded on Rhacotis was called Alexándreia hḗ kat' Aígypton (Ἀλεξάνδρεια ἡ κατ' Αἴγυπτον), which some sources translated as "Alexandria by Egypt", as the city was, at that time, on the periphery of Egypt proper (the area beside the Nile). Some of the Alexandrian and Greek populaces, e.g., Hypsicles, also referred to the city as Alexándreia hḗ prós Aígypton (Ἀλεξάνδρεια ἡ πρός Αἴγυπτον, "Alexandria near Egypt"). In the course of Roman rule in Egypt, the city's name was Latinised as Alexandrēa ad Aegyptum. In Coptic, the city continued to be referred to by its earlier name (Rakoti), with only a few exceptions. After the capture of Alexandria by the Rashiduns in AD 641, the name was Arabicised: initial Al- was re-analysed into the definite article; metathesis occurred on x, from [ks] to [sk]; and the suffix -eia was assimilated into the feminine adjectival suffix -iyya (ـِيَّة), finally rendering the name al-ʔiskandariyya (الْإِسْكَنْدَرِيَّة). History Radiocarbon dating of seashell fragments and lead contamination show human activity at the location during the period of the Old Kingdom (27th–21st centuries BC) and again in the period 1000–800 BC, followed by an absence of activity thereafter. From ancient sources it is known that a trading post existed at this location during the time of Rameses the Great for trade with Crete, but it had long been lost by the time of Alexander's arrival. A small Egyptian fishing village named Rhakotis (Egyptian: rꜥ-qdy.t, 'That which is built up') had existed since the 13th century BC in the vicinity and eventually grew into the Egyptian quarter of the city. Just east of Alexandria (where Abu Qir Bay is now), there were in ancient times marshland and several islands. As early as the 7th century BC, the important port cities of Canopus and Heracleion existed there. The latter was recently rediscovered underwater. Alexandria was founded by Alexander the Great in April 331 BC as Ἀλεξάνδρεια (Alexandreia), one of his many city foundations. After he captured the Egyptian satrapy from the Persians, Alexander wanted to build a large Greek city on Egypt's coast that would bear his name. He chose the site of Alexandria, envisioning the building of a causeway to the nearby island of Pharos that would create two great natural harbours. Alexandria was intended to supersede the older Greek colony of Naucratis as a Hellenistic center in Egypt and to be the link between Greece and the rich Nile valley. A few months after the foundation, Alexander left Egypt and never returned to the city in his lifetime. After Alexander's departure, his viceroy Cleomenes continued the expansion. The architect Dinocrates of Rhodes designed the city, using a Hippodamian grid plan. Following Alexander's death in 323 BC, his general Ptolemy Lagides took possession of Egypt and brought Alexander's body to Egypt with him. Ptolemy at first ruled from the old Egyptian capital of Memphis. In 322/321 BC he had Cleomenes executed. Finally, in 305 BC, Ptolemy declared himself Pharaoh as Ptolemy I Soter ("Savior") and moved his capital to Alexandria. Although Cleomenes was mainly in charge of overseeing Alexandria's early development, the Heptastadion and the mainland quarters seem to have been primarily Ptolemaic work. Inheriting the trade of ruined Tyre and becoming the centre of the new commerce between Europe and the Arabian and Indian East, the city grew in less than a generation to be larger than Carthage. 
Within a century, Alexandria had become the largest city in the world and, for some centuries more, was second only to Rome. It became Egypt's main Greek city, with Greek people from diverse backgrounds. The Septuagint, a Greek version of the Tanakh, was produced there. The early Ptolemies kept the city in order and fostered the development of its museum into the leading Hellenistic centre of learning (the Library of Alexandria, which faced destruction during Caesar's siege of Alexandria in 47 BC), but were careful to maintain the distinction of its population's three largest ethnicities: Greek, Egyptian and Jewish. By the time of Augustus, the city grid encompassed an area of 10 km2 (3.9 sq mi), and the total population during the Roman principate was around 500,000–600,000, which would wax and wane in the course of the next four centuries under Roman rule. According to Philo of Alexandria, in the year 38 AD, disturbances erupted between Jews and Greek citizens of Alexandria during a visit paid by King Agrippa I to Alexandria, principally over the respect paid by the Herodian nation to the Roman emperor, which quickly escalated to open affronts and violence between the two ethnic groups and the desecration of Alexandrian synagogues. This event has been called the Alexandrian pogroms. The violence was quelled after Caligula intervened and had the Roman governor, Flaccus, removed from the city. In 115 AD, large parts of Alexandria were destroyed during the Diaspora revolt, which gave Hadrian and his architect, Decriannus, an opportunity to rebuild it. In 215 AD, the emperor Caracalla visited the city and, because of some insulting satires that the inhabitants had directed at him, abruptly commanded his troops to put to death all youths capable of bearing arms. On 21 July 365 AD, Alexandria was devastated by a tsunami (365 Crete earthquake), an event annually commemorated years later as a "day of horror". In 619, Alexandria fell to the Sassanid Persians. The city was largely unharmed by the conquest, and a new palace called Tarawus was erected in the eastern part of the city, later known as Qasr Faris, "fort of the Persians". Although the Byzantine emperor Heraclius recovered it in 629, in 641 the Arabs under the general 'Amr ibn al-'As invaded it during the Muslim conquest of Egypt, after a siege that lasted 14 months. The first Arab governor of Egypt recorded to have visited Alexandria was Utba ibn Abi Sufyan, who strengthened the Arab presence and built a governor's palace in the city in 664–665. In reference to Alexandria, Ibn Battuta speaks of a number of Muslim saints who resided in the city. One such saint was Imam Borhan Oddin El Aaraj, who was said to perform miracles. Another notable figure was Yaqut al-'Arshi, a disciple of Abu Abbas El Mursi. Ibn Battuta also writes about Abu 'Abdallah al-Murshidi, a saint who lived in the Minyat of Ibn Murshed. Although al-Murshidi lived in seclusion, Ibn Battuta writes that he was regularly visited by crowds, high state officials, and even by the Sultan of Egypt at the time, al-Nasir Muhammad. Ibn Battuta also visited the Pharos lighthouse on two occasions: in 1326 he found it partly in ruins, and in 1349 it had deteriorated to the point that it was no longer possible to enter. Throughout the late medieval period, Alexandria re-emerged as a major metropolis and the most important commercial port in Egypt and one of the most important in the Mediterranean. 
The Jewish traveller Benjamin of Tudela even described it as "a trading market for all nations". Indeed, Alexandria was the outlet for all goods coming from Arabia, such as incense, and from India and South-East Asia, such as spices (pepper, cloves, cinnamon, etc.), precious stones, pearls and exotic woods like brazilwood. But it was also the outlet for goods from Africa, such as ivory and precious woods. These goods arrived in Alexandria after passing through Aden on their way to the Red Sea, then headed up the Red Sea to be unloaded in the port of Aydhab. From Aydhab, a caravan took the goods to the Nile, probably to the town of Qus. From there, the goods sailed down the Nile to Alexandria. These goods then found their way to the Alexandria market alongside Egyptian products. This route was the cheapest and fastest in comparison with the land routes that reached the Mediterranean from Syria or Constantinople. Latin merchants (Venetians, Genoese, Pisans, Catalans, Provençals, etc.) thus entered this market. As early as the 12th century, the major trading cities had funduqs and consuls in Alexandria. A funduq, in this context, is an area, often fortified, within the city dedicated to the community of a trading nation under the authority of a consul. The consul was responsible for adjudicating disputes between merchants of his nation, and also for cases in which a subject of the sultan lodged a complaint against a merchant of his nation. The terms of this installation were often set out in treaties between the sultans and the consuls. These treaties were part of a policy pursued by the early Mamluk sultans, who encouraged the arrival of merchants from Europe in Alexandria, since this trade not only brought the sultan considerable revenue, but also enabled him to obtain supplies of wood and iron from Europe. Later, in the 14th century, the Latin trade in Alexandria was also important for the sultans, as it enabled them to obtain supplies of mamluks (slave-soldiers) often sold by Genoese merchants. As this trade was very important to the sultans, they were keen to control the city's institutions. Indeed, in Alexandria, in addition to an emir (governor), the sultan sent a customs inspector who answered directly to the nazir al-khas (the person in charge of managing the sultan's patrimony). Customs was not only responsible for collecting customs duties, but also for the security of the port and its warehouses. Alexandria customs also played a role in commercial arbitration and was the preferred circuit for the sale of products brought in by the merchants, which took place at auction. These sales were set up to encourage the merchants to sell their products to or through the sultan, rather than selling them freely on the city's markets. Latin merchants also had jurisdictional privileges: in addition to being judged by their consul if a subject of the sultan lodged a complaint against them, Latin merchants could not be judged by the qadis (civil judges) but had to be judged by the mazalim (the sultan's courts). Alexandria lost much of its importance in international trade after Portuguese navigators discovered a new sea route to India in the late 15th century. This reduced the volume of goods passing through the Alexandrian port, as well as the Mamluks' political power. After the Battle of Ridaniya in 1517, the city was conquered by the Ottoman Turks and remained under Ottoman rule until 1798. 
Alexandria lost much of its former importance to the Egyptian port city of Rosetta during the 9th to 18th centuries, and it only regained its former prominence with the construction of the Mahmoudiya Canal in 1820.[citation needed] Alexandria figured prominently in the military operations of Napoleon's expedition to Egypt in 1798. French troops stormed the city on 2 July 1798, and it remained in their hands until the arrival of a British expedition in 1801. The British won a considerable victory over the French at the Battle of Alexandria on 21 March 1801, following which they besieged the city, which fell to them on 2 September 1801. Muhammad Ali, the Ottoman governor of Egypt, began rebuilding and redevelopment around 1810 and, by 1850, Alexandria had returned to something akin to its former glory. Egypt turned to Europe in its effort to modernise the country. Greeks, followed by other Europeans, began moving to the city. In the early 20th century, the city became a home for novelists and poets. In July 1882, the city came under bombardment from British naval forces and was occupied. In July 1954, the city was a target of an Israeli bombing campaign that later became known as the Lavon Affair. On 26 October 1954, Alexandria's Mansheya Square was the site of a failed assassination attempt on Gamal Abdel Nasser. Europeans began leaving Alexandria following the 1956 Suez Crisis, which led to an outburst of Arab nationalism. The nationalisation of property by Nasser, which reached its highest point in 1961, drove out nearly all the rest. Geography Alexandria is located in Egypt, on the southern coast of the Mediterranean. It lies in the far west of the Nile Delta. It is a densely populated city; its crowded core areas contrast with its large administrative area. The city's geology consists of soil sediments, oolitic sand and clay, oolitic limestone (from the Middle Miocene), grey shelly dolomite, marly dolomite, oncolitic limestone and dolomite, as well as shelly limestone. Alexandria has a borderline hot steppe and hot desert climate (Köppen climate classification: BSh/BWh). Like the rest of Egypt's northern coast, the prevailing north wind, blowing across the Mediterranean, gives the city a less severe climate than the desert hinterland. Rafah and Alexandria are the wettest places in Egypt; the other wettest places are Rosetta, Baltim, Kafr el-Dawwar, and Mersa Matruh. The city's climate is influenced by the Mediterranean Sea, which moderates its temperatures, causing variable rainy winters and moderately hot, slightly prolonged summers that, at times, can be very humid; January and February are the coolest months, with daily maximum temperatures typically ranging from 12 to 18 °C (54 to 64 °F) and minimum temperatures that can reach 5 °C (41 °F). Alexandria experiences violent storms, rain and sometimes sleet and hail during the cooler months; these events, combined with a poor drainage system, have been responsible for occasional flooding in the city in the past, though such floods rarely occur anymore. 
July and August are the hottest and driest months of the year, with an average daily maximum temperature of 30 °C (86 °F). The average annual rainfall is around 211 mm (8.3 in) but has been as high as 417 mm (16.4 in). Port Said, Kosseir, Baltim, Damietta and Alexandria have the least temperature variation in Egypt. The highest recorded temperature was 45 °C (113 °F) on 30 May 1961, and the coldest recorded temperature was 0 °C (32 °F) on 31 January 1994. A 2019 paper published in PLOS One estimated that under Representative Concentration Pathway 4.5, a "moderate" scenario of climate change where global warming reaches ~2.5–3 °C (4.5–5.4 °F) by 2100, the climate of Alexandria in the year 2050 would most closely resemble the current climate of Gaza City. The annual temperature would increase by 2.8 °C (5.0 °F), and the temperatures of the warmest and the coldest months by 2.9 °C (5.2 °F) and 3.1 °C (5.6 °F), respectively. According to Climate Action Tracker, the current warming trajectory appears consistent with 2.7 °C (4.9 °F), which closely matches RCP 4.5. Due to its location on the Nile River Delta, Alexandria is one of the most vulnerable cities to sea level rise in the entire world. According to some estimates, hundreds of thousands of people in its low-lying areas may already have to be relocated before 2030. The 2022 IPCC Sixth Assessment Report estimates that by 2050, Alexandria and 11 other major African cities (Abidjan, Algiers, Cape Town, Casablanca, Dakar, Dar es Salaam, Durban, Lagos, Lomé, Luanda and Maputo) would collectively sustain cumulative damages of US$65 billion under the "moderate" climate change scenario RCP 4.5 and US$86.5 billion under the high-emission scenario RCP 8.5, while RCP 8.5 combined with the hypothetical impact of marine ice sheet instability at high levels of warming would involve up to US$137.5 billion in damages. Additional accounting for "low-probability, high-damage events" may increase aggregate risks to US$187 billion for the "moderate" RCP 4.5, US$206 billion for RCP 8.5 and US$397 billion under the high-end ice sheet instability scenario. In every estimate, Alexandria alone bears around half of these costs. Since sea level rise would continue for about 10,000 years under every scenario of climate change, the future costs of sea level rise would only increase, especially without adaptation measures. Recent studies published in Earth's Future by the American Geophysical Union indicate that rising sea levels are causing increases in coastal aquifer levels, reaching building foundations and accelerating their corrosion and potential collapse. The studies predict that by 2025, more than 7,000 buildings in Alexandria will be at risk of collapse due to these groundwater processes. Ancient layout Greek Alexandria was divided into three regions: the Brucheum, or Royal (Greek) quarter; the Jewish quarter; and Rhakotis, the Egyptian quarter. Two main streets, lined with colonnades and each said to have been about 60 m (200 ft) wide, intersected in the centre of the city, close to the point where the Sema (or Soma) of Alexander (his Mausoleum) rose. This point is very near the present mosque of Nebi Daniel; the line of the great East–West "Canopic" street is also present in modern-day Alexandria, having only slightly diverged from the line of the modern Boulevard de Rosette (now Sharae Fouad). Traces of its pavement and canal have been found near the Rosetta Gate, and remnants of streets and canals were exposed in 1899 by German excavators outside the east fortifications, which lie well within the area of the ancient city. 
Alexandria consisted originally of little more than the island of Pharos, which was joined to the mainland by a 1,260 m-long (4,130 ft) mole called the Heptastadion ("seven stadia"—a stadium was a Greek unit of length measuring approximately 180 m or 590 ft). The end of this abutted on the land at the head of the present Grand Square, where the "Moon Gate" rose. All that now lies between that point and the modern "Ras al-Tin" quarter is built on the silt which gradually widened and obliterated this mole. The Ras al-Tin quarter represents all that is left of the island of Pharos, the site of the actual lighthouse having been weathered away by the sea. On the east of the mole was the Great Harbour, now an open bay; on the west lay the port of Eunostos, with its inner basin Kibotos, now vastly enlarged to form the modern harbour. In Strabo's time (latter half of the 1st century BC), the principal buildings were enumerated as they were to be seen from a ship entering the Great Harbour. The names of a few other public buildings on the mainland are known, but there is little information as to their actual position. None, however, are as famous as the building that stood on the eastern point of Pharos island: the Great Lighthouse, one of the Seven Wonders of the Ancient World, reputed to be 138 m (453 ft) high. The first Ptolemy began the project, and the second Ptolemy (Ptolemy II Philadelphus) completed it, at a total cost of 800 talents. It took 12 years to complete and served as a prototype for all later lighthouses in the world. The light was produced by a furnace at the top, and the tower was built mostly with solid blocks of limestone. The Pharos lighthouse was destroyed by an earthquake in the 14th century, making it the second-longest-surviving ancient wonder, after the Great Pyramid of Giza. A temple of Hephaestus also stood on Pharos at the head of the mole. In the 1st century, the population of Alexandria contained over 180,000 adult male citizens, according to a census dated to 32 AD, in addition to a large number of freedmen, women, children and slaves. Estimates of the total population range from 216,000 to 500,000, making it one of the largest cities ever built before the Industrial Revolution and the largest pre-industrial city that was not an imperial capital.[citation needed] Administration Alexandria expanded significantly in the 20th century, with urban development extending primarily to the east and west, as well as to the south. As for the southward expansion, the urban development in the Moharram Bey area extended to the Mahmoudiya Canal and the Gheit El-Enab area. Currently, the city has expanded eastward, aided by the Abu Qir railway, which facilitated urban growth in the Sidi Bishr, Mandara, and Maamoura areas, reaching Abu Qir. Westward expansion encompassed the areas of El-Max, El-Dekheila, and Kilometer 21, reaching Sidi Kreir and bringing the total length of Alexandria's coastline to approximately 55 km. The city has also expanded southward, reaching the Kafr El-Dawwar and Amriya areas and the borders of the Beheira Governorate. The city of Alexandria is currently divided into nine administrative districts. Economy Alexandria is considered a major contributor to the Egypt Vision 2030. The city's economy is primarily based on its role as a major industrial hub, accounting for approximately 40% of Egypt's total industrial output in sectors such as chemicals, metals, and textiles. 
It is also home to Egypt's largest port, a major destination for domestic tourism, and a growing real estate market. The city's economy is further supported by large-scale infrastructure projects, which foster entrepreneurship, and a relatively low cost of living compared to other major Egyptian cities. The Egyptian Exchange also has its second branch in Alexandria. The real estate market is growing, driven by a relatively low cost of living compared to Cairo and ongoing development projects. Alexandria actively supports the entrepreneurship sector through initiatives focused on training and consulting services for startups, particularly in green technology and tourism. Local authorities have implemented measures to support businesses during challenging times, including streamlining permit procedures and providing financial support to small and medium-sized enterprises. The Alexandria Public Free Zone in Amreya is the largest free zone in Egypt, offering tax exemptions for businesses in apparel, chemicals, and iron and steel production. Borg El Arab is also a major satellite city and industrial hub that houses the city's main international airport and numerous manufacturing plants. The presence of the Port of Alexandria has contributed to the growth and diversification of industrial activities within the city. Alexandria's industrial zones encompass numerous sectors, including textiles, pharmaceuticals, iron and steel, food processing, home appliances, and fertilizers. The Alexandria metropolitan area comprises eight industrial zones. In 2002, the cultivated area in Alexandria reached approximately 87,403 feddans, divided as follows: 10,494 feddans in the villages of Abis and Maamoura, 12,856 feddans in the areas of Khurshid and Zawida, and 64,053 feddans in Amriya. The irrigation system in Alexandria relies on water from the Mahmoudiya Canal, the Nubariya Canal, the Bahig Canal, the King Mariout Canal, new water projects, and rainwater. Wheat, cotton, maize, barley, and rice are among the most important crops cultivated in Alexandria. Alexandria is regarded as a hub for Egypt's petroleum industry, housing major refining, production, and maintenance facilities. As of late 2025, the city continues to serve as a primary center for refining crude oil and manufacturing specialized petroleum derivatives. The city's refining companies focus on processing crude oil and producing fuels and base oils. The petroleum port, managed by APC, is a dedicated basin at the Alexandria Port that handles the export and import of petroleum products. The city also hosts advanced laboratories such as the TankOil Group, which provides accredited petroleum testing and analysis to ensure compliance with international standards. With numerous tourist attractions, most notably its mild climate, beaches, archaeological sites, and entertainment venues, tourism plays a significant role in the economy of Alexandria. In 2018, the city boasted 40 hotels of varying ratings. The city of Alexandria enjoys a high percentage of domestic tourism, but a relatively small share of international tourism. In 2006, approximately 534,235 non-Egyptian tourists visited the city, representing 2% of total inbound tourism to Egypt. Cultural tourism programs are also considered a major contributor to the city's tourism sector. 
The city's total fish production in 2001 was estimated at approximately 11,627 tons, and fishing takes place in the waters of the Mediterranean Sea and Lake Mariout, which covers an area of approximately 14,000 acres. In 2020, nearly ten thousand fishermen worked in the lake, which produced about 20,000 tons of fish annually, mostly tilapia, representing about 4.7% of the total fish production in Egyptian lakes. As for livestock, the number of cattle of all types in 2002 reached approximately 117,767 head, divided into 22,618 head in the Abis and El-Maamoura areas, 23,666 head in the Khurshid and Zawida areas, and 71,483 head in the Amriya area. In 2020, the city had 20 veterinary units and four slaughterhouses. Cityscape Alexandria's cityscape blends ancient and modern Mediterranean architecture: remains such as Pompey's Pillar and the Roman Theatre stand amidst bustling streets, the Corniche along the sea, the historic Citadel of Qaitbay, and the Stanley Bridge, alongside grand cultural sites such as the Alexandria Opera House and a mix of neighborhoods and bustling port areas, all echoing its past as a Hellenistic cultural hub. Once the world's most populous city and a center of culture, home to the Great Library and the Lighthouse, Alexandria in its modern form carries remnants of its Egyptian, Ptolemaic, Greek, and early Christian past, a testament to its nearly thousand-year reign as a leading Mediterranean city. Alexandria's architecture spans ancient wonders like the Lighthouse of Alexandria and the Library as well as a rich tapestry of European styles, such as Renaissance, Art Deco, and Neo-Classical, from its 19th-century boom, blending Mediterranean influences with Egyptian heritage, seen in landmarks such as Montaza Palace and Saad Zaghloul Square, and modern marvels such as the Bibliotheca Alexandrina, reflecting its unique historical layers from ancient grandeur to modern cosmopolitanism. Today, architectural trends in the city reflect a shift toward balancing its rich cosmopolitan heritage with modern urban needs and climate resilience. Current efforts focus on documenting and repurposing the city's early 20th-century buildings. In late 2025, Alexandria celebrated the centenary of its Art Deco architecture, which is recognized as a Mediterranean model of the style. Historic cinemas such as the Rio Cinema on Fouad Street are being studied and preserved as living examples of modern-age architecture. Architects are reimagining old, vacant structures to give them new life while reducing urban waste. The city once flourished with diverse nationalities, as seen in the buildings around Mansheya Square and the Corniche. Downtown Alexandria, often referred to as the Central District or the city's "Center", is a vibrant historical hub blending 19th-century architecture with modern commercial life. The district is full of shops, malls, historic buildings, Mediterranean restaurants, and cultural sites like the Roman Theatre at Kom El-Dekka. It is a key transportation hub with excellent connections, featuring busy streets, squares like Mahatet El Raml, and landmarks such as the Stanley Bridge, offering a blend of historical and modern city life. Mahatet El Raml is a central transportation and social hub featuring the city's iconic tram system; it is also home to Belle Époque buildings, historic hotels such as the Cecil Hotel, and traditional cafes. Mansheya Square likewise features historic monuments. The downtown area also meets the city's Corniche, which stretches along the Mediterranean, and is home to the Alexandria National Museum. 
The seaside promenade is a major thoroughfare that stretches along the Eastern Harbor of Alexandria, Egypt. The promenade begins at the Citadel of Qaitbay, which was built in the 15th century on the site of the ancient Lighthouse of Alexandria, and terminates at the Montaza Gardens and the historic Montaza Palace. Pompey's Pillar, a Roman triumphal column, is one of the best-known ancient monuments still standing in Alexandria today. It is located on Alexandria's ancient acropolis—a modest hill adjacent to the city's Arab cemetery—and was originally part of a temple colonnade. Including its pedestal, it is 30 m (99 ft) high; the shaft is of polished red granite, 2.7 m (8.9 ft) in diameter at the base, tapering to 2.4 m (7.9 ft) at the top. The shaft is 27 m (88 ft) high, made of a single piece of granite with a volume of 132 m3 (4,662 cu ft) and a weight of approximately 396 tons. Pompey's Pillar may have been erected using the same methods that were used to erect the ancient obelisks; Roger Hopkins and Mark Lehner conducted several obelisk-erecting experiments, including a successful attempt to erect a 25-ton obelisk in 1999. Alexandria's catacombs, known as Kom El Shoqafa, lie a short distance southwest of the pillar and consist of a multi-level labyrinth, reached via a large spiral staircase and featuring dozens of chambers adorned with sculpted pillars, statues, and other syncretic Romano-Egyptian religious symbols, burial niches, and sarcophagi. The catacombs were long forgotten by the citizens until they were discovered by accident in 1900. The most extensive ancient excavation currently being conducted in Alexandria is known as Kom El Deka. It has revealed the ancient city's well-preserved theater and the remains of its Roman-era baths. The Alexandria Naval Unknown Soldier Memorial is a prominent historical monument located in the Manshaya district along the Corniche. It was built in 1933 to honor Khedive Ismail, whose statue was erected at the top of the monument; in 1965 it was transformed into the Monument to the Unknown Soldier, and the statue of Khedive Ismail was removed. The monument is distinguished by its location overlooking the Mediterranean Sea, which has witnessed the glories and heroic deeds of the Egyptian Navy throughout history, and it serves as a memorial to the navy's martyrs. The Temple of Taposiris Magna was built in the Ptolemaic era and dedicated to Osiris, and marked the completion of the construction of Alexandria. It is located in Abusir, the western suburb of Alexandria in Borg el Arab city. Only the outer wall and the pylons remain from the temple. There is evidence that sacred animals were worshiped there: archaeologists found an animal necropolis near the temple, and remains of a Christian church show that the temple was used as a church in later centuries. Also found in the same area are remains of public baths built by the emperor Justinian, a seawall, quays, and a bridge. Near the beach side of the area stand the remains of a tower built by Ptolemy II Philadelphus; the tower was an exact scale replica of the destroyed Alexandrine Pharos Lighthouse. The Citadel of Qaitbay is a defensive fortress located on the Mediterranean sea coast. It was established in 1477 AD (882 AH) by the Mamluk Sultan Al-Ashraf Sayf al-Din Qa'it Bay. The Citadel is located on the eastern side of the northern tip of Pharos Island at the mouth of the Eastern Harbour.
It was erected on the exact site of the famous Lighthouse of Alexandria, one of the Seven Wonders of the Ancient World, and was built on an area of 17,550 square metres. Excavation Persistent efforts have been made to explore the antiquities of Alexandria, with encouragement and help from the local Archaeological Society and many individuals. Excavations were performed in the city by Greeks seeking the tomb of Alexander the Great, without success. The past and present directors of the museum have been enabled from time to time to carry out systematic excavations whenever opportunity offered; D. G. Hogarth made tentative researches on behalf of the Egypt Exploration Fund and the Society for the Promotion of Hellenic Studies in 1895, and a German expedition worked for two years (1898–1899). But two difficulties face the would-be excavator in Alexandria: lack of space for excavation and the underwater location of some areas of interest. Since the great and growing modern city stands immediately over the ancient one, it is almost impossible to find any considerable space in which to dig, except at enormous cost. Cleopatra VII's royal quarters were inundated by earthquakes and tsunamis, leading to gradual subsidence in the 4th century AD. This underwater section, containing many of the most interesting parts of the Hellenistic city, including the palace quarter, was explored in 1992 and is still being extensively investigated by the French underwater archaeologist Franck Goddio and his team; the work raised a noted head of Caesarion. The sites are being opened up to tourists, to some controversy. The spaces that are most open are the low grounds to the northeast and southwest, where it is practically impossible to get below the Roman strata. The most important results were those achieved by Dr. G. Botti, late director of the museum, in the neighbourhood of "Pompey's Pillar", where there is a good deal of open ground. Here, substructures of a large building or group of buildings have been exposed, which are perhaps part of the Serapeum. Nearby, immense catacombs and columbaria have been opened which may have been appendages of the temple. These contain one very remarkable vault with curious painted reliefs, now artificially lit and open to visitors. The objects found in these researches are in the museum, the most notable being a great basalt bull, probably once an object of cult in the Serapeum. Other catacombs and tombs have been opened in Kom El Shoqafa (Roman) and Ras El Tin (painted). The German excavation team found remains of a Ptolemaic colonnade and streets in the north-east of the city, but little else. Hogarth explored part of an immense brick structure under the mound of Kom El Deka, which may have been part of the Paneum, the Mausolea, or a Roman fortress. The making of the new foreshore led to the dredging up of remains of the Patriarchal Church, and the foundations of modern buildings are seldom laid without some objects of antiquity being discovered. Infrastructure Alexandria has a number of higher education institutions. Alexandria University is a public university that follows the Egyptian system of higher education. Many of its faculties are internationally renowned, most notably the Faculty of Medicine and the Faculty of Engineering. In addition, the Egypt-Japan University of Science and Technology in New Borg El Arab city is a research university set up in collaboration between the Japanese and Egyptian governments in 2010.
The Arab Academy for Science, Technology & Maritime Transport is a semi-private educational institution that offers courses for high school, undergraduate, and postgraduate students. It is considered among the most reputable universities in Egypt, after the American University in Cairo (AUC), because of its international recognition, including accreditation from engineering boards in the UK and from ABET in the US. Université Senghor is a private French university that focuses on the teaching of humanities, politics, and international relations, and mainly recruits students from the African continent. Other institutions of higher education in Alexandria include the Alexandria Institute of Technology (AIT) and Pharos University in Alexandria. In September 2023, the Greek University of Patras announced that it is opening a branch in Alexandria, in a first-of-its-kind move by a Greek higher education institution. The branch will operate two departments, one Greek-speaking and one English-speaking, in the subjects of Greek culture, Greek language, and Greek philosophy. Alexandria has a long history of foreign educational institutions. The first foreign schools date to the early 19th century, when French missionaries began establishing French charitable schools to educate the Egyptians. Today, the most important French schools in Alexandria run by Catholic missionaries include Collège de la Mère de Dieu, Collège Notre Dame de Sion, Collège Saint Marc, Écoles des Soeurs Franciscaines (four different schools), École Girard, École Saint Gabriel, École Saint-Vincent de Paul, École Saint Joseph, École Sainte Catherine, and Institution Sainte Jeanne-Antide. As a reaction to the establishment of French religious institutions, a secular (laic) mission established Lycée el-Horreya, which initially followed a French system of education but is currently run by the Egyptian government. The only school in Alexandria that completely follows the French educational system is Lycée Français d'Alexandrie (École Champollion), usually frequented by the children of French expatriates and diplomats in Alexandria. The Italian school is the Istituto "Don Bosco". English-language schools in Alexandria are the most popular; those in the city include Riada American School, Riada Language School, Forsan American School, Forsan International School, Alexandria Language School, Future Language School, Future International Schools (Future IGCSE, Future American School and Future German school), Alexandria American School, British School of Alexandria, Egyptian American School, Pioneers Language School, Egyptian English Language School, Princesses Girls' School, Sidi Gaber Language School, Zahran Language School, Taymour English School, Sacred Heart Girls' School, Schutz American School, Victoria College, El Manar Language School for Girls, Kawmeya Language School, El Nasr Boys' School (previously called British Boys' School), and El Nasr Girls' College (previously called English Girls' College). There are also two German schools in Alexandria: the Deutsche Schule der Borromäerinnen (DSB of Saint Charles Borromé) and the Neue Deutsche Schule Alexandria. The Montessori educational system was first introduced in Alexandria in 2009 at Alexandria Montessori. Around the 1890s, the proportion of women who could read in Alexandria was twice that of Cairo. As a result, specialist women's publications appeared, such as al-Fatāh by Hind Nawal, the country's first women's journal.
Health care in Alexandria consists of a mix of public, military, university, and private facilities. While the city is considered a growing hub for medical tourism due to its coastal location and specialized centers, the quality of care varies significantly between the public and private sectors. The city hosts a large number of hospitals affiliated with the Ministry of Health and Population and its various bodies, including hospitals and comprehensive clinics affiliated with the General Authority for Health Insurance, such as Gamal Abdel Nasser Hospital. Alexandria also hosts several high-standard private and specialized hospitals such as Elite Hospital, Alexandria New Medical Center (ANMC), Andalusia Hospitals, Hassab Hospital, the Egyptian Armed Forces Medical Complex, Saudi German Hospital Alexandria, and the Alexandria Vascular Hospital (AVC). The city also includes the Alexandria University Hospitals, teaching hospitals affiliated with the university, such as the main university hospital (El-Miri), El-Mowasat University Hospital, and El-Hadra University Hospital. The city is home to many private hospitals and medical centers such as Al-Salama Hospital, the Coptic Hospital, Alexandria Medical Center, and Qasr El-Shefa Hospital. Alexandria also hosts the African Centre for Women's Healthcare. The city's principal airport is currently Alexandria International Airport, located about 25 km (16 mi) from the city centre and composed of two terminals: Terminal 1, the older terminal, opened in February 2010, while Terminal 2, the newer terminal, was inaugurated in 2025. Among the most important roads are three main routes that run parallel and connect the city center to its eastern parts: the Alexandria Corniche (or El-Geish Road), which is approximately 17 km long and connects the Bahary area in the west to the Mandara area in the east; Al-Horreya Road (or Abu Qir Road), about 10 km long, linking the Shallalat area in the west to the Victoria area in the east; and the Al-Mahmoudia Axis, which is 23 km long and connects the El Qabary area in the west to the Abis villages in the southeast. In addition to these, the city includes other major roads such as Fouad Street, considered the oldest street in Alexandria, Suez Canal Street, El-Nabi Daniel Street, Port Said Street, Sultan Hussein Street, and the Ring Road. Alexandria is also served by several highways, such as the Cairo–Alexandria Desert Road, the Cairo–Alexandria Agricultural Road, and the International Coastal Road. Alexandria has four ports: the Western Port, also known as Alexandria Port; the Dekhela Port, located to the west of the Alexandria Port; the Eastern Port, which is mainly used as a yachting harbour; and Abu Qir Port, at the northeast of the governorate, a commercial port for general cargo and phosphates. Alexandria's intracity commuter rail system extends from Misr Station (Alexandria's primary intercity railway station) to Abu Qir, parallel to the tram line. The commuter line's locomotives operate on diesel, as opposed to the overhead-electric tram. Alexandria hosts two intercity railway stations: Misr Station and Sidi Gaber railway station (in the district of Sidi Gaber, in the centre of the eastern expansion in which most Alexandrines reside), both of which also serve the commuter rail line. Intercity passenger service is operated by Egyptian National Railways.
Alexandria also has a public bus transport network operated by the Alexandria Passenger Transport Authority, in addition to other private companies. As of July 2023, the Authority ran 103 bus routes, operating various types of medium and large, regular and air-conditioned buses, divided into four areas: sixteen routes for Moharam Bek buses, twenty routes for Central buses, twenty-nine routes for East buses, and thirty-eight routes for West buses, in addition to microbus routes and taxis distinguished by their yellow-and-black colors. Muharram Bek station is considered the main bus station, serving all means of transport that connect with neighboring cities and governorates. Construction of the Alexandria Metro was due to begin in 2020 at a cost of $1.05 billion. An extensive tramway network was built in 1860 and is the oldest in Africa. The network begins at the El Raml district in the west and ends in the Victoria district in the east. Culture The Royal Library of Alexandria, in Alexandria, Egypt, was once the largest library in the world. It is generally thought to have been founded at the beginning of the 3rd century BC, during the reign of Ptolemy II of Egypt. It was likely created after his father had built what would become the first part of the library complex, the temple of the Muses—the Museion, Greek Μουσείον (from which the Modern English word museum is derived). It has been reasonably established that the library, or parts of the collection, were destroyed by fire on a number of occasions (library fires were common, and replacement of handwritten manuscripts was very difficult, expensive, and time-consuming). To this day, the details of the destruction (or destructions) remain a lively source of controversy. The Bibliotheca Alexandrina was inaugurated in 2002, near the site of the old Library, and annually hosts the Alexandria International Film Festival. The Alexandria National Museum was inaugurated on 31 December 2003. It is located in a restored Italian-style palace on Tariq El Horreya Street (formerly Rue Fouad), near the centre of the city, and contains about 1,800 artifacts that narrate the story of Alexandria and Egypt, most of which came from other Egyptian museums. The museum is housed in the old palace of Al-Saad Bassili Pasha, who was one of the wealthiest wood merchants in Alexandria. Construction on the site was first undertaken in 1926. The Graeco-Roman Museum was the city's main archeological museum, focused on artifacts from its Greco-Roman period. It was opened in 1892, closed in 2005 for extensive renovations and expansion, and re-opened to the public in October 2023. The Alexandria Museum of Fine Arts is a museum for Egyptian and Middle Eastern fine art situated in the Moharam Bek neighborhood of Alexandria. It houses a collection of works by Egyptian artists and a selection of Baroque, Romantic, Rococo, and Orientalist works, along with noteworthy examples of carving, printmaking, and sculpture by Egyptian and European artists. Other museums in the city include the Cavafy Museum, the Museum of Fine Arts, and the Royal Jewelry Museum. The Alexandria Opera House hosts performances of classical music, Arabic music, ballet, and opera. The 1,200-seat Renaissance Revival venue was built from 1918 to 1921, designed by Georges Parcq, and was inaugurated as the Teatro Mohamed Ali in the presence of King Fouad I.
It became known by its current name after the 1952 Revolution. Alexandria has a long literary history, serving as both a birthplace and a muse for numerous influential writers across different eras and languages. Notable Egyptian Alexandrian writers include Theophilus I of Alexandria, the 23rd Pope of Alexandria and Patriarch of the See of Saint Mark from 385 to 412. He is remembered as a pivotal Egyptian figure in late antique Christianity and as a prolific author, though most of his writings survive only in fragments or translations. Abdullah an-Nadeem (1843–1896) was also born and lived in the city. Born in the late 19th century, Tawfiq al-Hakim was a major figure in modern Egyptian literature, particularly renowned as Egypt's most famous playwright and essayist. Ibrahim Abdel Meguid is known for his "Alexandria Trilogy," which explores the city's cosmopolitan history. Edward el-Kharrat, also from Alexandria, was a leading figure of the Egyptian avant-garde who often depicted the city in a dreamlike, modernist style. Other writers include Mustafa Nasr, Youssef Ziedan, and Ahmed Zaki Abu Shadi. Foreign writers associated with Alexandria include Constantine P. Cavafy, who spent most of his life in the city and whose home is now the Cavafy Museum, and Lawrence Durrell, author of The Alexandria Quartet, a series of four novels that famously depict the city's multicultural past. Other foreign writers include E. M. Forster and André Aciman. Contemporary literary hubs in the city include the Bibliotheca Alexandrina, which hosts the Writing and Scripts Center, focused on the history and aesthetics of writing, and the Atelier d'Alexandrie, a long-standing cultural center that serves as a meeting point for local writers, poets, and artists. Films set in Alexandria range from historical epics such as Cleopatra and ancient-focused films like Agora, to modern Egyptian dramas such as Youssef Chahine's Alexandria... Why? and Alexandria Again and Forever, exploring the city's history and diverse cultures. Notable examples include the historical drama No Surrender, the contemporary Messages from the Sea, and artistic films like Microphone, as well as Justine (1969), based on Lawrence Durrell's The Alexandria Quartet and set in the city in 1938, and The Sailor from Gibraltar (1967), a British film also set in Alexandria. The city celebrates a mix of Islamic, Coptic Christian, and national holidays: the Eid holidays of Eid al-Fitr and Eid al-Adha, religious observances such as Coptic Christmas on January 7 and Mawlid al-Nabi, national days such as Revolution Day on January 25 and Egyptian Armed Forces Day on October 6, and cultural events such as Sham El-Nessim, offering diverse cultural experiences year-round. The city also celebrates July 26 as the national day of Alexandria Governorate. Ramadan, the holy month of fasting, is observed with special meals and a focus on community. A major Mediterranean hub for both classical and contemporary art, the city's art scene is characterized by its unique Egyptian and Greco-Roman heritage and a thriving modern movement of painters and sculptors. Alexandria houses several institutions dedicated to preserving and showcasing fine arts, such as the Alexandria Museum of Fine Arts, featuring over 1,381 works, including paintings, sculpture, and graphics by contemporary Egyptian and international artists.
It is also the main venue for the Alexandria Biennale for Mediterranean Countries. The Bibliotheca Alexandrina hosts a permanent collection of modern art, featuring works by the Wanly brothers and the sculptor Adam Henein, and the Graeco-Roman Museum displays a massive collection of ancient sculptures and artifacts from the Greek, Roman, and Coptic eras. During the Hellenistic period, poets active at the court of Ptolemy II Philadelphus (Philiscus of Corcyra, Lycophron, Alexander Aetolus, Sositheus, ...) became known as the Alexandrian Pleiad. In modern times, Constantine P. Cavafy, a major Greek poet who was born and lived in Alexandria, used several themes associated with the city in his work: "Alexandrian Kings", "In Alexandria, 31 B.C.", "Myres: Alexandria 340 A.D", "Kaisarion" and "The God Abandons Antony". The most famous mosque in Alexandria is the Abu al-Abbas al-Mursi Mosque in Bahary. Other notable mosques in the city include the Ali ibn Abi Talib mosque in Somouha, Bilal mosque, al-Gamaa al-Bahari in Mandara, Hatem mosque in Somouha, Hoda el-Islam mosque in Sidi Bishr, al-Mowasah mosque in Hadara, Sharq al-Madina mosque in Miami, al-Shohadaa mosque in Mostafa Kamel, Al Qa'ed Ibrahim Mosque, Yehia mosque in Zizinia, Sidi Gaber mosque in Sidi Gaber, Sidi Besher mosque, Rokay el-Islam mosque in Elessway, Elsadaka Mosque in Sidibesher Qebly, Elshatbi mosque, and Sultan mosque. Alexandria is the base of the Salafi movements in Egypt. Al-Nour Party, which is based in the city and overwhelmingly won most of the Salafi votes in the 2011–12 parliamentary election, supports President Abdel Fattah el-Sisi. Alexandria was once considered the third-most important city in Christianity, after Rome and Constantinople. Until 430, the Patriarch of Alexandria was second only to the bishop of Rome. The Church of Alexandria had jurisdiction over most of the continent of Africa. After the Council of Chalcedon in AD 451, the Alexandrian Church split between the Miaphysites and the Melkites. The Miaphysites went on to constitute what is known today as the Coptic Orthodox Church, and the Melkites what is known today as the Greek Orthodox Church of Alexandria. In the 19th century, Catholic and Protestant missionaries converted some of the adherents of the Orthodox churches to their respective faiths. Today the Patriarchal seat of the Pope of the Coptic Orthodox Church is Saint Mark Cathedral (though in practice the Patriarch has long resided in Cairo). The most important Coptic Orthodox churches in Alexandria include Pope Cyril I Church in Cleopatra, Saint George's Church in Sporting, Saint Mark and Pope Peter I Church in Sidi Bishr, Saint Mary Church in Assafra, Saint Mary Church in Gianaclis, Saint Mina Church in Fleming, Saint Mina Church in Mandara, and Saint Takla Haymanot's Church in Ibrahimeya. The most important Eastern Orthodox churches in Alexandria are Agioi Anárgyroi Church, Church of the Annunciation, Saint Anthony Church, Archangels Gabriel and Michael Church, Taxiarchon Church, Saint Catherine Church, Cathedral of the Dormition in Mansheya, Church of the Dormition, Prophet Elijah Church, Saint George Church, Saint Joseph Church in Fleming, Saint Joseph of Arimathea Church, Saint Mark and Saint Nektarios Chapel in Ramleh, Saint Nicholas Church, Saint Paraskevi Church, Saint Sava Cathedral in Ramleh, Saint Theodore Chapel, and the Russian church of Saint Alexander Nevsky, which serves the Russian-speaking community in the city.
The Apostolic Vicariate of Alexandria in Egypt-Heliopolis-Port Said has jurisdiction over all Latin Catholics in Egypt. Member churches include Saint Catherine Church in Mansheya and the Church of the Jesuits in Cleopatra. The city is also the nominal see of the Melkite Greek Catholic titular Patriarchate of Alexandria (generally vested in its leading Patriarch of Antioch) and the actual cathedral see of its Patriarchal territory of Egypt, Sudan and South Sudan, which uses the Byzantine Rite; and the nominal see of the Armenian Catholic Eparchy of Alexandria (for all Egypt and Sudan, whose actual cathedral is in Cairo), a suffragan of the Armenian Catholic Patriarch of Cilicia, using the Armenian Rite. The Saint Mark Church in Shatby, founded as part of Collège Saint Marc, is multi-denominational and holds liturgies according to Latin Catholic, Coptic Catholic and Coptic Orthodox rites. In antiquity, Alexandria was a major centre of the cosmopolitan religious movement called Gnosticism (today mainly remembered as a Christian heresy). Alexandria's Jewish community declined rapidly following the 1948 Arab–Israeli War, after which negative reactions towards Zionism among Egyptians led to Jewish residents in the city, and elsewhere in Egypt, being perceived as Zionist collaborators. Most Jewish residents of Egypt moved to the newly settled Israel, France, Brazil, and other countries in the 1950s and 1960s. The community once numbered 50,000 but is now estimated at below 50. The most important synagogue in Alexandria is the Eliyahu Hanavi Synagogue. Sports The main sport that interests Alexandrians is football, as is the case in the rest of Egypt and Africa. Alexandria Stadium is a multi-purpose stadium currently used mostly for football matches. Built in 1929, it is the oldest stadium in Egypt and holds 20,000 people. Alexandria was one of three cities that hosted the African Cup of Nations in January 2006, which Egypt won, and the stadium was among its venues. Alexandria has four stadiums in total. Sea sports such as surfing, jet-skiing, and water polo are practiced on a smaller scale, and the skateboarding culture in Egypt started in this city. Alexandria is also home to the Alexandria Sporting Club, which is especially known for its basketball team, which traditionally provides the country's national team with key players. The city hosted the AfroBasket, the continent's most prestigious basketball tournament, on four occasions (1970, 1975, 1983, 2003). Other sports like tennis and squash are usually played in private social and sports clubs. Alexandria is also known as the yearly starting point of the Cross Egypt Challenge, an international cross-country motorcycle and scooter rally conducted throughout the most difficult tracks and roads of Egypt; a huge celebration is held the night before the rally starts, after all the international participants arrive in the city. International relations Alexandria is twinned with a number of cities around the world.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Equivalent_dose] | [TOKENS: 1479] |
Contents Equivalent dose Equivalent dose (symbol H) is a dose quantity representing the stochastic health effects of low levels of ionizing radiation on the human body; it represents the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the International System of Units (SI), its unit of measure is the sievert (Sv). Application To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose. Equivalent dose is designated by the ICRP as a "limiting quantity", used to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". It is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects. Calculation Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR, which is dependent on the type and energy of the radiation R. The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation. The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness, as shown in the accompanying table. Equivalent dose is calculated from absorbed dose as H_T = Σ_R W_R · D_{T,R}, where D_{T,R} is the mean absorbed dose (in grays) deposited in tissue or organ T by radiation of type R, and W_R is the radiation weighting factor for that type. Thus, for example, an absorbed dose of 1 Gy from alpha particles (W_R = 20) leads to an equivalent dose of 20 Sv, which is estimated to have the same biological effect as 20 Gy of absorbed dose from gamma rays, whose weighting factor is 1. To obtain the equivalent dose for a mix of radiation types and energies, the sum is taken over all contributing radiation types, accounting for their varying biological effects. History The concept of equivalent dose was developed in the 1950s. In its 1990 recommendations, the ICRP revised the definitions of some radiation protection quantities, and provided new names for the revised quantities. Some regulators, notably the International Committee for Weights and Measures (CIPM) and the US Nuclear Regulatory Commission, continue to use the old terminology of quality factors and dose equivalent, even though the underlying calculations have changed. Future use At the ICRP 3rd International Symposium on the System of Radiological Protection in October 2015, ICRP Task Group 79 reported on the "Use of Effective Dose as a Risk-related Radiological Protection Quantity". This included a proposal to discontinue the use of equivalent dose as a separate protection quantity.
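The weighted sum above is simple to compute. Below is a minimal Python sketch of the calculation; the weighting factors are the ICRP Publication 103 values for common radiation types (neutrons are omitted because their factor is a continuous function of energy rather than a single constant), and the function and dictionary names are hypothetical, chosen for illustration.

```python
# Sketch of the equivalent-dose calculation H_T = sum_R W_R * D_{T,R}.
# Weighting factors are ICRP Publication 103 values; neutrons are omitted
# because their factor varies continuously with energy.
ICRP103_WEIGHTING_FACTORS = {
    "photon": 1.0,     # gamma rays and X-rays
    "electron": 1.0,   # also muons
    "proton": 2.0,     # also charged pions
    "alpha": 20.0,     # also fission fragments and heavy ions
}

def equivalent_dose_sv(absorbed_doses_gy: dict) -> float:
    """Sum W_R * D_{T,R} over all radiation types irradiating one tissue.

    absorbed_doses_gy maps a radiation type to the mean absorbed dose (Gy)
    it deposits in the tissue; the result is the equivalent dose in Sv.
    """
    return sum(ICRP103_WEIGHTING_FACTORS[r] * d for r, d in absorbed_doses_gy.items())

# The example from the text: 1 Gy absorbed from alpha particles gives 20 Sv,
# while 1 Gy from gamma rays (photons, W_R = 1) would give 1 Sv.
print(equivalent_dose_sv({"alpha": 1.0}))                 # 20.0
print(equivalent_dose_sv({"photon": 0.5, "alpha": 0.1}))  # 0.5 + 2.0 = 2.5
```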
This would avoid confusion between equivalent dose, effective dose, and dose equivalent, and would allow absorbed dose in Gy to be used as a more appropriate quantity for limiting deterministic effects to the eye lens, skin, hands & feet. These proposals will need to go through a number of formal stages. Units The SI unit of measure for equivalent dose is the sievert, defined as one joule per kilogram. In the United States the roentgen equivalent man (rem), equal to 0.01 sievert, is still in common use, although regulatory and advisory bodies are encouraging transition to the sievert. Related quantities Equivalent dose HT is used for assessing stochastic health risk due to external radiation fields that penetrate uniformly through the whole body. However, it needs further corrections when the field is applied only to part(s) of the body, or non-uniformly, to measure the overall stochastic health risk to the body. To enable this, a further dose quantity called effective dose must be used to take into account the varying sensitivity of different organs and tissues to radiation. Whilst equivalent dose is used for the stochastic effects of external radiation, a similar approach is used for internal, or committed, dose. The ICRP defines an equivalent dose quantity for individual committed dose, which is used to measure the effect of inhaled or ingested radioactive materials. A committed dose from an internal source represents the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source. Committed equivalent dose, HT(τ), is the time integral of the equivalent dose rate in a particular tissue or organ that will be received by an individual following intake of radioactive material into the body by a Reference Person, where τ is the integration time in years. This refers specifically to the dose in a specific tissue or organ, in a similar way to external equivalent dose. The ICRP states: "Radionuclides incorporated in the human body irradiate the tissues over time periods determined by their physical half-life and their biological retention within the body. Thus they may give rise to doses to body tissues for many months or years after the intake. The need to regulate exposures to radionuclides and the accumulation of radiation dose over extended periods of time has led to the definition of committed dose quantities". Equivalent dose and dose equivalent are often confused, but they are not the same concept. Although the CIPM definition states that the linear energy transfer function of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities named effective and equivalent dose, which are calculated from more complex computational models and are distinguished by not having the phrase dose equivalent in their name. Prior to 1990, the ICRP used the term "dose equivalent" to refer to the absorbed dose at a point multiplied by the quality factor at that point, where the quality factor was a function of linear energy transfer (LET). Currently, the ICRP's definition of "equivalent dose" represents an average dose over an organ or tissue, and radiation weighting factors are used instead of quality factors. The phrase "dose equivalent" is reserved for quantities which use Q for their calculation; several such quantities are defined by the ICRU and ICRP. In the US there are further, differently named dose quantities which are not part of the ICRP system of quantities.
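To make the committed-dose definition concrete, here is a toy numerical sketch. It assumes the equivalent dose rate in the tissue decays with a single effective half-life combining radioactive decay and biological clearance, which is a deliberate simplification (real ICRP biokinetic models are multi-compartment), and it uses the ICRP's conventional 50-year integration period for adult intakes; all names and example values are illustrative.

```python
import math

def committed_equivalent_dose_sv(initial_rate_sv_per_yr: float,
                                 effective_half_life_yr: float,
                                 tau_yr: float = 50.0) -> float:
    """Toy model of committed equivalent dose H_T(tau).

    Assumes the tissue's equivalent dose rate decays exponentially with one
    effective half-life (radioactive decay plus biological clearance), then
    integrates that rate from intake (t = 0) to the integration time tau.
    """
    lam = math.log(2) / effective_half_life_yr  # effective decay constant, 1/yr
    # Closed form of the integral of r0 * exp(-lam * t) dt from 0 to tau:
    return initial_rate_sv_per_yr / lam * (1.0 - math.exp(-lam * tau_yr))

# Example: an intake producing an initial 2 mSv/yr dose rate in an organ, with
# a 10-year effective half-life, commits roughly 28 mSv over 50 years.
print(committed_equivalent_dose_sv(0.002, 10.0))  # ~0.028 Sv
```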
The International Committee for Weights and Measures (CIPM) and the US Nuclear Regulatory Commission continue to use the old terminology of quality factors and dose equivalent. The NRC quality factors are independent of linear energy transfer, though not always equal to the ICRP radiation weighting factors. The NRC's definition of dose equivalent is "the product of the absorbed dose in tissue, quality factor, and all other necessary modifying factors at the location of interest." However, it is apparent from their definition of effective dose equivalent that "all other necessary modifying factors" excludes the tissue weighting factor. The radiation weighting factors for neutrons also differ between the US NRC and the ICRP, as shown in the accompanying diagram. Cumulative equivalent dose due to external whole-body exposure is normally reported to nuclear energy workers in regular dosimetry reports. In the US, three different equivalent doses are typically reported.
======================================== |
[SOURCE: https://techcrunch.com/video/is-your-startups-check-engine-light-on-google-clouds-vp-explains-what-to-do/] | [TOKENS: 742] |
Is your startup's check engine light on? Google Cloud's VP explains what to do Startup founders are being pushed to move faster than ever, using AI while facing tighter funding, rising infrastructure costs, and more pressure to show real traction early. Cloud credits, access to GPUs, and foundation models have made it easier to get started, but those early infrastructure choices can have unforeseen consequences once startups move beyond free credits and into real cloud bills. On this episode of TechCrunch's Equity podcast, Rebecca Bellan caught up with Darren Mowry, Google Cloud's vice president of global startups, who is right at the center of those tradeoffs. Watch as they discuss what Mowry is seeing across the startup ecosystem, how Google Cloud is competing for AI startups, and what founders should be thinking about as they scale. Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You can also follow Equity on X and Threads, at @EquityPod. Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network's flagship podcast. Before joining TechCrunch in 2022, she was one of two producers at a four-station conglomerate where she wrote, recorded, voiced and edited content, and engineered live performances and interviews from guests like lovelytheband. Theresa is based in New Jersey and holds a bachelor's degree in Communication from Monmouth University. You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-surface-radiation-20] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet" for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread in the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; the same year saw the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system between 3.5 and 4 billion years ago. This ring system may have been formed from a moon 20 times more massive than Phobos that orbited Mars billions of years ago, with Phobos a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
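The mass, volume, and gravity percentages quoted above are tied together by the inverse-square law, g ∝ M/R². A quick check in Python, assuming the commonly cited Mars-to-Earth radius ratio of about 0.532 (the text gives the diameter only as roughly half of Earth's):

```python
# Check that ~11% of Earth's mass at ~53% of Earth's radius yields ~38% of
# Earth's surface gravity, since g = G*M/R^2 and G cancels in the ratio.
MASS_RATIO = 0.107    # Mars/Earth mass; the text rounds this to ~11%
RADIUS_RATIO = 0.532  # Mars/Earth radius (assumed value, not from the text)

gravity_ratio = MASS_RATIO / RADIUS_RATIO ** 2
print(f"surface gravity: {gravity_ratio:.0%} of Earth's")  # -> 38%
```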
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts roughly 4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris, preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It spans around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core with a radius of 613 ± 67 kilometres (381 ± 42 mi). Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. They can start in a tiny area, then spread out for hundreds of metres, and have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day (22 millirads per day) experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists, writers, and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum, and the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and the high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given size settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about one grain thickness every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization (involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars) or by Martian life. The Martian atmosphere's higher concentration of CO2 and lower surface pressure, compared to Earth's, may explain why sound is attenuated more strongly on Mars, where natural sources of sound are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Like Earth, Mars has seasons that alternate between its northern and southern hemispheres. Additionally, the orbit of Mars is markedly more eccentric than Earth's, and the planet reaches perihelion when it is summer in the southern hemisphere and winter in the northern, and aphelion when it is winter in the southern hemisphere and summer in the northern. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case. Summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, and so, by the inverse-square law, receives just 43% as much sunlight. Mars has the largest dust storms in the Solar System, with wind speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. Seasonally, carbon dioxide frost (dry ice) also accumulates on the polar caps. Hydrology While Mars contains significant amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
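The scale-height and sunlight figures quoted above can be reproduced from first principles. A sketch, assuming an isothermal atmosphere at a representative temperature of about 210 K, a mean molar mass of about 43.5 g/mol for the CO2-dominated air, and Martian surface gravity of 3.71 m/s² (all three values are assumptions supplied here, not figures from the text):

```latex
% Isothermal scale height: H = RT / (Mg)
H \approx \frac{8.314 \times 210}{0.0435 \times 3.71} \approx 1.08 \times 10^{4}\ \mathrm{m} \approx 10.8\ \mathrm{km}

% Relative insolation at 1.52 AU, from the inverse-square law
\frac{S_{\mathrm{Mars}}}{S_{\mathrm{Earth}}} = \frac{1}{1.52^{2}} \approx 0.43
```

Both results match the values given in the text: a scale height of about 10.8 km, and about 43% of the sunlight received by Earth.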
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No gullies partially degraded by weathering, and no impact craters superimposed on them, have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars.
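The 11-metre equivalent layer quoted at the start of this passage can be checked with a few lines of arithmetic. A minimal sketch in Python, assuming an ice volume of about 1.6 × 10⁶ km³ for the south polar cap (an assumed round figure chosen to be consistent with the quoted depth, not a measured value from the text):

```python
import math

ICE_VOLUME_KM3 = 1.6e6    # assumed volume of south polar water ice, km^3
MARS_RADIUS_KM = 3389.5   # mean radius of Mars, km (standard value)

# Surface area of a sphere: 4 * pi * r^2  (~1.44e8 km^2 for Mars)
surface_area_km2 = 4 * math.pi * MARS_RADIUS_KM ** 2

# Spread the melted volume uniformly over the whole surface
layer_m = ICE_VOLUME_KM3 / surface_area_km2 * 1000  # km -> m
print(f"Equivalent global layer: {layer_m:.1f} m")  # ~11 m
```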
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on the timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium (D/H) in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4-kilometre-wide (50.6 mi) Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior, about 12,100 cubic kilometres. During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
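The "five to seven times" enrichment factor follows directly from the two D/H ratios quoted above:

```latex
\frac{(9.3 \pm 1.7) \times 10^{-4}}{1.56 \times 10^{-4}} \approx 6.0 \pm 1.1
```

i.e. roughly 4.9 to 7.1 times the terrestrial value, consistent with the stated range.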
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. Of all the planets, the gravitational potential difference, and thus the delta-v, needed to transfer between Earth and Mars is the second lowest (only Venus requires less). The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with solar conjunction, when Mars and Earth are on opposite sides of the Sun, forming a roughly straight line through it. The average time between successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as much as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, its magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest from Earth, it is more than seven times as distant as when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, meaning it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at 9,376 km (5,826 mi) and 23,460 km (14,580 mi), respectively.
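The synodic period quoted above follows from the two sidereal orbital periods. A sketch, using 365.25 days for Earth and 686.98 days for Mars (the standard value, consistent with the 687 days given above):

```latex
% Synodic period S from the sidereal periods of Earth and Mars
\frac{1}{S} = \frac{1}{365.25\ \mathrm{d}} - \frac{1}{686.98\ \mathrm{d}}
\quad\Longrightarrow\quad S \approx 779.9\ \mathrm{d}
```

This matches the 779.94-day figure in the text; the 764–812-day spread between individual oppositions arises largely because the orbits are elliptical rather than circular.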
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear very different from those of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but very slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. However, both moons have near-circular orbits close to the equatorial plane, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. A third possibility is the involvement of a third body or a type of impact disruption. More recent lines of evidence that Phobos has a highly porous interior, and that its composition contains mainly phyllosilicates and other minerals known from Mars, point toward an origin from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos are fragments of an older moon, formed by debris from a large impact on Mars and then destroyed by a more recent impact upon the satellite. Mars may also have yet-undiscovered moons smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. More recently, a study by a team of researchers from multiple countries suggested that a lost moon at least fifteen times the size of Phobos may once have existed; rocks recording tidal processes on the planet suggest that those tides may have been raised by such a past moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
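The 11-hour rise-to-rise interval for Phobos quoted above can be recovered from the relative angular rates. A sketch, assuming an orbital period of about 7.66 h for Phobos and a sidereal rotation period of about 24.62 h for Mars (both standard values supplied here, not taken from the text):

```latex
T_{\mathrm{rise}} = \left( \frac{1}{7.66\ \mathrm{h}} - \frac{1}{24.62\ \mathrm{h}} \right)^{-1} \approx 11.1\ \mathrm{h}
```

Because Phobos orbits faster than Mars rotates, it overtakes the surface below it, which is why it rises in the west and sets in the east.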
During Sumerian times, Nergal was a minor deity of little significance, but during later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by ancient Egyptian astronomers, and by 1534 BCE they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"); more commonly, the Greek name for the planet was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first telescopic observations of Mars. The diurnal parallax of Mars was later measured with telescopes in an effort to determine the Sun–Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes had reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
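The Babylonian 79-year relation noted above is easy to verify against the modern period values (a quick consistency check; the synodic and sidereal periods used are the standard modern figures quoted elsewhere in this article):

```python
SYNODIC_DAYS = 779.94    # mean synodic period of Mars
SIDEREAL_DAYS = 686.98   # sidereal orbital period of Mars
YEAR_DAYS = 365.25       # Julian year

print(37 * SYNODIC_DAYS)   # ~28,858 days: 37 synodic periods
print(42 * SIDEREAL_DAYS)  # ~28,853 days: 42 circuits of the zodiac
print(79 * YEAR_DAYS)      # ~28,855 days: 79 years

# All three totals agree to within a few days out of ~28,855,
# so the Babylonian 79-year cycle is accurate to about 0.01%.
```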
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory equipped with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the observation of Mars during the last good opportunity in 1894, and during the following, less favorable, oppositions. Lowell published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, which flew by in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically revised. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved uncrewed spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering elements of the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to nine functioning spacecraft. Seven are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online, at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from missions of these types had reached over seven tons; most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still being published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; such glass could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the rock's origin cannot be definitively determined as biological or abiotic with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021 China announced plans for a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and sustained initially by resupply from Earth and by in situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (the Greek Ares), but was also associated with the demigod Heracles (the Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "the Bringer of War". The planet's symbol, a circle with an arrow pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; Edgar Rice Burroughs's Barsoom series; C. S. Lewis's novel Out of the Silent Planet (1938); and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Municipal_wireless_network] | [TOKENS: 1753] |
Contents Municipal wireless network A municipal wireless network is a citywide wireless network. This usually works by providing municipal broadband via Wi-Fi to large parts or all of a municipal area by deploying a wireless mesh network. The typical deployment design uses hundreds of wireless access points deployed outdoors, often on poles. The operator of the network acts as a wireless internet service provider. Overview Municipal wireless networks go far beyond the existing piggybacking opportunities available near public libraries and some coffee shops. The basic premise of carpeting an area with wireless service in urban centers is that it is more economical to the community to provide the service as a utility rather than to have individual households and businesses pay private firms for such a service. Such networks are capable of enhancing city management and public safety, especially when used directly by city employees in the field. They can also be a social service to those who cannot afford private high-speed services. When the network service is free and a small number of clients consume a majority of the available capacity, operating and regulating the network might prove difficult. In 2003, Verge Wireless formed an agreement with Tropos Networks to build a municipal wireless network in the downtown area of Baton Rouge, Louisiana. Carlo MacDonald, the founder of Verge Wireless, suggested that it could give cities a way to improve economic development and allow developers to build mobile applications that make use of faster bandwidth. Verge Wireless built networks for Baton Rouge, New Orleans, and other areas. Some applications include wireless security cameras, police mug-shot software, and location-based advertising. In 2007, some companies with existing cell sites offered high-speed wireless services where the laptop owner purchased a PC card or adapter based on EV-DO cellular data receivers or WiMAX rather than 802.11b/g. A few high-end laptops at that time featured built-in support for these newer protocols. WiMAX is designed to implement a metropolitan area network (MAN), while 802.11 is designed to implement a wireless local area network (LAN). However, the use of cellular networks is expensive for consumers, as they are often on limited data plans. In the 2010s, larger cities embraced the smart city concept to tackle problems such as traffic congestion and crime, to encourage economic growth, to respond to the effects of climate change, and to improve the delivery of city services. However, by 2018 it had become clear that the private sector could not be relied upon to build city-wide wireless networks meeting the smart city objectives of municipal governments and public utility providers. Finance The construction of municipal wireless networks is a significant part of their lifetime costs. Usually, a private firm works with local government to construct a network and operate it. Financing is usually shared by both the private firm and the municipal government. Once operational, the service may be free to users via public finance or advertising, or may be a paid service. Among deployed networks, usage as measured by the number of distinct users has been shown to be moderate to light. Private firms serving multiple cities sometimes maintain an account for each user, and allow the user a limited amount of mobile service in the cities covered.
As of 2007, some municipal WiFi deployments were delayed as the private and public partners negotiated the business model and financing. Corporate city-wide wireless networks Google WiFi is entirely funded by Google. Despite a failed attempt to provide citywide WiFi through a partnership with internet service provider EarthLink in 2007, the company claims that it is working to provide a wireless network for the city of San Francisco, California, although there is no specified completion date. Some other projects that are still in the planning stages have pared back their planned coverage from 100% of a municipal area to only densely commercially zoned areas. One of the most ambitious planned projects is to provide wireless service throughout Silicon Valley, but the winner of the bid seems ready to request that the 40 cities involved help cover more of the cost, which has raised concerns that the project will ultimately be too slow to market to be a success. Advances in technology in 2005–2007 may allow wireless community network projects to offer a viable alternative. Such projects have an advantage in that, as they do not have to negotiate with government entities, they have no contractual obligations for coverage. A promising example is Meraki's demonstration in San Francisco, which already claimed 20,000 distinct users as of October 2007. In 2009, Microsoft and Yahoo also provided free wireless to select regions in the United States. Yahoo's free WiFi was made available for one year to the Times Square area in New York City beginning November 10, 2009. Microsoft made free WiFi available at select airports and hotels across the United States, in exchange for one search on the Bing search engine by the user. The City of Adelaide in South Australia, in collaboration with the South Australian Government, operates a meshed network, "Adelaide Free WiFi". For the past five years the network has attracted some 8,000 daily users, and its popularity continues to grow despite the proliferation of 4G technology. Criticism and externalities Municipal wireless networks face opposition from telecommunications providers, particularly in the United States, South Africa, India and the European Union. In the 2000s, telecommunications providers argued that it is neither economical nor legal for municipal governments to own or operate such businesses. The dominant type of wireless networks are private wireless local area networks (WLANs), for which individuals or businesses pay a subscription to a local carrier. In 2006, the US Federal Trade Commission expressed concerns about such private-public partnerships as trending towards a franchise monopoly. Within the United States, providing a municipal wireless network was not recognized as a priority. Some have argued that the benefits of the public approach may exceed the costs, similar to cable television. In the early 2010s, concerns were articulated that a considerable percentage of the world population did not have access to affordable Internet access. Despite the growing digitalization of business and government services, 37 percent of the European and 22 percent of the North American population did not have affordable access to the Internet in 2009.
Because local governments and municipalities in rural economies either could not fund wireless networks or did not consider them a priority, numerous communities across the world have built and funded autonomous community wireless networks (CWNs), taking advantage of the license-free 2.4 GHz spectrum and open-source software. The former New York state politician and lobbyist Thomas M. Reynolds argues that unintended externalities are possible when local governments provide Internet service to their constituents. A private service provider could choose to offer limited or no service to a region if that region's largest city opted to provide free Internet service, thus eliminating the potential customer base. Because the private sector receives no money from taxpayers, it cannot compete with a tax-subsidized service, and its withdrawal then prevents other municipalities in the region from benefiting from the private provider's services. At the same time, those smaller municipalities do not benefit from the larger city's free service, because it is subsidized by that city's taxpayers and is not concerned with the maximization of profits. In this view, government-provided broadband is not operated to generate income, and private providers cannot compete with it profitably, making municipal wireless networks anticompetitive. Cities with municipal wireless service In many cases several points or areas are covered, without blanket area coverage. Free public WiFi is available in tourist areas of big cities, railway stations, airports, and governmental facilities in Shanghai, Beijing, Tianjin, Harbin, Shenyang, Shenzhen, Kunming, Hangzhou, Suzhou, Wuxi, Nanjing, Xi'an, Chengdu, Chongqing, Fuzhou, Ningbo, Foshan, Dalian, Changchun, Qingdao, Yantai, Dongguan, Macau, Huangshan, Hefei, Guiyang, and Guangzhou; nearly all Chinese cities have free WiFi coverage, hosted either by their local service carrier or the city government, and all railway stations and airports in China have free WiFi. In Pakistan, Telenor launched WiFi hotspots in Karachi in 2014 (https://propakistani.pk/2014/09/22/telenor-launches-wifi-hotspots-in-karachi/; https://wifispc.com/pakistan). In addition, a few U.S. states, such as Illinois, Iowa, and Massachusetts, offer free Wi-Fi service at welcome centers and roadside rest areas located along major Interstate highways. In Melbourne, Australia, free WiFi is available at the Bourke St Mall, the Queen Victoria Market, the Melbourne Convention and Exhibition Centre, the Melbourne Museum, and on platforms at CBD train stations; it is also available in central Ballarat and central Bendigo. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Quixote_(web_framework)] | [TOKENS: 141] |
Contents Quixote (web framework) Quixote is a software framework for developing web applications in Python. Quixote "is based on a simple, flexible design, making it possible to write applications quickly and to benefit from the wide range of available third-party Python modules". A Quixote application is typically a Python package, a collection of modules grouped into a single directory tree. Quixote then maps a URL to a function or method inside the Python package; the function is then called with the contents of the HTTP request, and the results are returned to the client. |
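The URL-to-callable mapping described above can be illustrated with a minimal, hypothetical sketch. This is a generic illustration of the traversal pattern, not Quixote's actual API; the function name resolve, the package name myapp, and the request-handling convention are all invented for the example:

```python
# Hypothetical sketch of URL-to-function traversal in the style described
# above; this is NOT Quixote's real API, only an illustration of the idea.
import importlib

def resolve(package: str, path: str):
    """Map a URL path such as '/blog/view' to the callable blog.view
    inside the given Python package."""
    parts = [p for p in path.strip("/").split("/") if p]
    func_name = parts[-1] if parts else "index"   # default page name (assumed)
    module_name = package
    if len(parts) > 1:
        module_name += "." + ".".join(parts[:-1])
    module = importlib.import_module(module_name)
    return getattr(module, func_name)

# Usage (hypothetical): look up a handler, then call it with the request.
# handler = resolve("myapp", "/blog/view")
# response = handler(request)  # the result is returned to the client
```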
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-IMFWEO.US-19] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean. It is a megadiverse country, with the world's third-largest land area and third-largest population, exceeding 341 million. Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. By 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890.
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules. "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of Americus, the first word of Americus Vesputius, the Latinized name of the Italian explorer Amerigo Vespucci (1454–1512); it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507. Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" rarely refers to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are thought to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements there failed due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts. Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War, and his later refusal to run for a third term as the country's first president, established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing between 13,200 and 16,700 deaths along the forced march.
Settler expansion, as well as this influx of Indigenous peoples from the East, resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, and by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and states in the North enacted laws to prohibit slavery within their boundaries; an active abolitionist movement reemerged in the 1830s. At the same time, support for slavery strengthened in Southern states, where widespread use of inventions such as the cotton gin (1793) had made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the U.S. victory, Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado, and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return of slaves who took refuge in non-slave states to their owners in the South, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave who had been brought into non-slave territory, simultaneously declaring the entire Missouri Compromise unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina in December 1860, 11 slave-state governments voted to secede from the United States, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans, as well as significant numbers of Germans and other Central Europeans, moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. The Redeemers quickly drove out the carpetbaggers and regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns to operate in the Midwest, and segregation to persist in communities across the country; this segregation would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. The expansion continued into the early 20th century, by which point the United States' economy outpaced those of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely through their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and the invention of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery, and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies, Germany and Italy, until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" which met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies, and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones, spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also among the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats; the United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world have adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C.; the federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. The tribes hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties; however, parties developed independently of it in the 18th century, beginning with the Federalists and Anti-Federalists. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party; the former is perceived as relatively liberal in its political platform, and the latter as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies in the country, and many also have consulates (official representatives). Likewise, nearly all countries maintain formal diplomatic missions with the United States; the exceptions are Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
U.S. geopolitical attention also turned to the Indo-Pacific when the country joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid later resumed; Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches: the Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons, the second-largest stockpile after Russia's. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government; they are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the National Guard of the United States and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at every level from local to national. Law is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing U.S. federal courts' rulings and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world (531 people per 100,000 inhabitants) and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest by nominal GDP since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for PPP, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large Treasury securities market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA partners Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by output value, after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and in 2023 they had the fourth-highest median household income, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and a lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth in R&D spending as a percentage of GDP. In 2022, the United States published the second-highest number of scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The U.S. private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. At about 4 million miles (6.4 million kilometers), the United States' road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks; about 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries, and many localities remain relatively car-dependent. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries; service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network, and U.S. civilian airlines are all privately owned. The three largest airlines in the world by total number of passengers carried are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Sixteen of the world's 50 busiest airports, including five of the top 10, are in the United States. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, while some were privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight, in contrast to the more passenger-centered rail networks of Europe. Because they are mostly privately owned operations, U.S. railroads lag behind the rest of the world in electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km); they are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, the busiest among them being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and in 2019 the country had the world's highest rate of children living in single-parent households, at 23%. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group, at 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto national language, and in 2025, Executive Order 14224 declared it the official language of the United States. However, the U.S. has never had a de jure official language, as Congress has never passed a law to designate English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by one million people in 2010, had fallen to 857,000 speakers by 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region, and "ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (up 0.7 years compared to 2023), while life expectancy for women was 81.4 years (up 0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries, for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, then-President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (having won 413 awards). U.S. tertiary or higher education has earned a global reputation: many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans outspend all other nations in combined public and private spending. Colleges and universities directly funded by the federal government, which include the U.S. service academies, the Naval Postgraduate School, and the military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role: according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity, the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants, with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described both as a homogenizing melting pot and as a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country; flag desecration, hate speech, blasphemy, and lese-majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured, as well as the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality, and LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants, though whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Federal Council on the Arts and the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. In the early to mid-19th century, writer and critic John Neal helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by it. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th-century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered on industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox); all four major broadcast television networks are commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today; about 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world (Google, YouTube, Facebook, Instagram, and ChatGPT) are all American-owned; other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S.
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles arrived early enough to have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. Manhattan's Garment District has been closely identified with American fashion since the industry's emergence in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan. Labels also cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful movies selling the most tickets in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs study before going on to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for MLS, all are considered the preeminent leagues in their respective sports worldwide. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL has the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are among the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. |
======================================== |
[SOURCE: https://www.reddit.com/r/MinecraftCommands/comments/1r9bjr4/nether_portal_block_spawn_egg/] | [TOKENS: 50] |
Nether portal block spawn egg So in a kit I saw a spawn egg which spawns ender portal block and one with the end portal block |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-30] | [TOKENS: 1856] |
Contents xAI (company) X.AI Corp., doing business as xAI, is an American company working in the areas of artificial intelligence (AI), social media, and technology that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company's flagship products are the generative AI chatbot Grok and the social media platform X (formerly Twitter), the latter of which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. As chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding out of a total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital, and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock, and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired sister company X Corp., the developer of the social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in it. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok, xAI's chatbot designed to advance artificial intelligence capabilities. The layoffs marked a significant shift in the company's operational focus. 
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis voiced concerns about the facility's increased electricity usage, which peaks at 150 megawatts, and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily supplement the power supply. The truck-mounted generators produce about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. Environmental advocates said that the gas-burning turbines emit large quantities of gases causing air pollution, and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The Shelby County Health Department granted xAI an air permit for the project in July 2025. The Southern Environmental Law Center has stated that the current gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use. xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power. The expansion reflects xAI's commitment to competing with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that structured xAI as a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organized xAI into four primary development teams, one for the Grok app and others for features such as Grok Imagine; Grokipedia, X, and API features fell under smaller teams. Products According to Musk in July 2023, a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking". Musk also said that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that when the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities. 
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release what he described as a great AI-generated game before the end of 2026. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Electric_vehicle] | [TOKENS: 10469] |
Contents Electric vehicle An electric vehicle (EV) is a motorized vehicle propelled fully or mostly by electric power. EVs encompass road vehicles (electric cars, buses, trucks, and personal transporters), rail vehicles (electric trains, trams, and monorails), electric boats and submersibles, electric aircraft (both fixed-wing and multirotor), and electric spacecraft. EVs came into existence in the late 19th century, when the Second Industrial Revolution brought forth electrification and the mass utilization of electric motors. Electricity was among the preferred methods for powering early motor vehicles because it was quieter and provided a comfort and ease of operation that could not be achieved by the internal combustion cars of the time. However, range anxiety hindered mass adoption throughout the 20th century. Internal combustion engines (both gasoline and diesel engines) were the dominant propulsion mechanisms for cars and trucks for about 100 years, although electricity-powered locomotion became commonplace in other vehicle types, such as overhead line-powered mass transit vehicles, as well as special-purpose vehicles such as mobility scooters. Since the late 20th century, technological advancement in lithium batteries, which offer superior energy density and current output versus lead-acid batteries, has revived public interest in EVs as zero-emission options. Manufacturers mostly switched to hybrids that use internal combustion engines like conventional vehicles but add electric motors as a supplement, powered by electricity produced internally by motor-generators and recovered from regenerative braking. Plug-in hybrid electric vehicles, which can be recharged from an electric grid and use electric motors as the primary propulsion rather than as a supplement to combustion engines, did not see mass production until the late 2000s, and battery electric cars did not become practical options for the consumer market until the 2010s. Technological progress in electric vehicle batteries, electric traction motors, and automotive electronics (particularly electronic control units) has made electric cars more feasible and, in some cases, more cost-efficient than conventional ICE vehicles during the 21st century, with market penetration in some countries like China reaching nearly half of all new vehicles sold. As a means of reducing tailpipe emissions of greenhouse gases and other air pollutants, and to reduce dependency on fossil fuels, government incentives are also available in many areas to promote the adoption of electric cars. China is the world's leading EV producer, accounting for more than 70% of global production and 67% of global sales of electric vehicles in 2024. History Electric motive power started in 1827, when Hungarian priest Ányos Jedlik built the first rudimentary yet functional electric motor; the next year he used it to power a small model car. In 1835, Professor Sibrandus Stratingh of the University of Groningen, in the Netherlands, built a miniature electric car, and sometime between 1832 and 1839, Robert Anderson of Scotland invented the first crude electric carriage, powered by non-rechargeable primary cells. American blacksmith and inventor Thomas Davenport built a toy electric locomotive, powered by a primitive electric motor, in 1835. In 1838, a Scotsman named Robert Davidson built an electric locomotive that attained a speed of four miles per hour (6 km/h). 
In England, a patent was granted in 1840 for the use of rails as conductors of electric current, and similar American patents were issued to Lilley and Colten in 1847. The first mass-produced electric vehicles appeared in America in the early 1900s. In 1902, the Studebaker Automobile Company entered the automotive business with electric vehicles, though it also entered the gasoline vehicle market in 1904. However, with the advent of cheap assembly-line cars by the Ford Motor Company, the popularity of electric cars declined significantly. Due to a lack of electricity grids and the limitations of storage batteries at the time, electric cars did not gain much popularity; however, electric trains gained immense popularity due to their economies and achievable speeds. By the 20th century, electric rail transport had become commonplace due to advances in the development of electric locomotives. Over time the general-purpose commercial use of electric cars was reduced to specialist roles as platform trucks, forklift trucks, ambulances, tow tractors, and urban delivery vehicles, such as the iconic British milk float. For most of the 20th century, the UK was the world's largest user of electric road vehicles. Electrified trains were used for coal transport, as the motors did not use the valuable oxygen in the mines. Switzerland's lack of natural fossil resources forced the rapid electrification of its rail network. One of the earliest rechargeable batteries – the nickel–iron battery – was favored by Edison for use in electric cars. EVs were among the earliest automobiles, and before the preeminence of light, powerful internal combustion engines (ICEs), electric automobiles held many vehicle land speed and distance records in the early 1900s. They were produced by Baker Electric, Columbia Electric, Detroit Electric, and others, and at one point in history outsold gasoline-powered vehicles. In 1900, 28 percent of the cars on the road in the US were electric. EVs were so popular that even President Woodrow Wilson and his secret service agents toured Washington, D.C., in their Milburn Electrics, which covered 60–70 miles (100–110 km) per charge. Most producers of passenger cars opted for gasoline cars in the first decade of the 20th century, but electric trucks were an established niche well into the 1920s. Several developments contributed to a decline in the popularity of electric cars. Improved road infrastructure required a greater range than that offered by electric cars, and the discovery of large reserves of petroleum in Texas, Oklahoma, and California led to the wide availability of affordable gasoline, making internal-combustion-powered cars cheaper to operate over long distances. Electric vehicles were often marketed as women's luxury cars, which may have created a stigma among male consumers. Also, internal-combustion-powered cars became ever easier to operate thanks to the invention of the electric starter by Charles Kettering in 1912, which eliminated the need for a hand crank to start a gasoline engine, and the noise emitted by ICE cars became more bearable thanks to the use of the muffler, which Hiram Percy Maxim had invented in 1897. As roads were improved outside urban areas, the electric vehicle's range could not compete with the ICE. Finally, the initiation of mass production of gasoline-powered vehicles by Henry Ford in 1913 significantly reduced the cost of gasoline cars as compared to electric cars. 
In the 1930s, National City Lines, a partnership of General Motors, Firestone, and Standard Oil of California, purchased many electric tram networks across the country to dismantle them and replace them with GM buses. The partnership was convicted of conspiring to monopolize the sale of equipment and supplies to their subsidiary companies, but was acquitted of conspiring to monopolize the provision of transportation services. In January 1990, General Motors introduced its EV concept two-seater, the "Impact", at the Los Angeles Auto Show. That September, the California Air Resources Board mandated major-automaker sales of EVs, in phases starting in 1998. From 1996 to 1998, GM produced 1,117 EV1s, 800 of which were made available through three-year leases. Chrysler, Ford, GM, Honda, and Toyota also produced limited numbers of EVs for California drivers during this period. In 2003, upon the expiration of GM's EV1 leases, GM discontinued them. The discontinuation has variously been attributed to a number of factors. A movie made on the subject in 2005–2006 was titled Who Killed the Electric Car? and released theatrically by Sony Pictures Classics in 2006. The film explores the roles of automobile manufacturers, the oil industry, the U.S. government, batteries, hydrogen vehicles, and the general public in limiting the deployment and adoption of this technology. Ford released a number of its Ford Ecostar delivery vans into the market. Honda, Nissan, and Toyota also repossessed and crushed most of their EVs, which, like the GM EV1s, had been available only by closed-end lease. After public protests, Toyota sold 200 of its RAV4 EVs; they later sold at over their original $40,000 price. Later, BMW of Canada sold off a number of Mini EVs when their Canadian testing ended. The production of the Citroën Berlingo Electrique stopped in September 2005. Zenn started production in 2006 but ended by 2009. The Copenhagen Summit, held in 2009 amid severe observable climate change brought on by human-made greenhouse gas emissions, saw more than 70 countries develop plans to eventually reach net zero. For many countries, adopting more EVs will help reduce the use of gasoline. During the late 20th and early 21st century, the environmental impact of the petroleum-based transportation infrastructure, along with the fear of peak oil, led to renewed interest in electric transportation infrastructure. In recent years, the market for electric off-road motorcycles, including dirt bikes, has seen significant growth, a trend driven by advancements in battery technology and increasing demand for recreational electric vehicles. EVs differ from fossil fuel-powered vehicles in that the electricity they consume can be generated from a wide range of sources, including fossil fuels, nuclear power, and renewables such as solar power and wind power, or any combination of those. Recent advancements in battery technology and charging infrastructure have addressed many of the earlier barriers to EV adoption, making electric vehicles a more viable option for a wider range of consumers. The carbon footprint and other emissions of electric vehicles vary depending on the fuel and technology used for electricity generation. The electricity may be stored in the vehicle using a battery, flywheel, or supercapacitors. Vehicles using internal combustion engines, by contrast, typically derive their energy from a single or a few sources, usually non-renewable fossil fuels. 
A key advantage of electric vehicles is regenerative braking, which recovers kinetic energy that would otherwise be lost as heat during friction braking and returns it to the on-board battery as electricity. According to the International Energy Agency's Global EV Outlook 2024, more than 14 million new electric cars were sold worldwide in 2023, representing around 18% of total car sales for the year. China is the world's leading EV producer, accounting for more than 70% of global production and 67% of global sales of electric vehicles in 2024. Electricity sources EVs are much more efficient than fossil fuel vehicles and have few direct emissions. At the same time, they do rely on electrical energy that is generally provided by a combination of non-fossil fuel plants and fossil fuel plants. There are many ways to generate electricity, of varying costs, efficiency, and ecological desirability. EVs can be made less polluting overall by modifying the source of their electricity. In some areas, individuals can ask utilities to provide their electricity from renewable energy. Because electricity can be generated from so many different sources, it also gives EVs a high degree of energy resilience. It is also possible to build hybrid EVs that derive electricity from multiple sources. For especially large EVs, such as submarines, the chemical energy of a diesel–electric system can be replaced by a nuclear reactor. The nuclear reactor usually provides heat, which drives a steam turbine, which drives a generator, which is then fed to the propulsion. See Nuclear marine propulsion. A few experimental vehicles, such as some cars and a handful of aircraft, use solar panels for electricity. These systems are powered from an external generator plant (nearly always when stationary), and then disconnected before motion occurs, with the electricity stored in the vehicle until needed. Batteries, electric double-layer capacitors, and flywheel energy storage are forms of rechargeable on-board electricity storage systems. By avoiding an intermediate mechanical step, such vehicles can achieve better energy conversion efficiency than hybrids, sidestepping unnecessary energy conversions. Furthermore, electrochemical battery conversions are reversible, allowing electrical energy to be stored in chemical form. Components The type of battery, the type of traction motor, and the motor controller design vary according to the size, power, and proposed application, which can be as small as a motorized shopping cart or wheelchair, through pedelecs, electric motorcycles and scooters, neighborhood electric vehicles, and industrial forklift trucks, and including many hybrid vehicles. An electric-vehicle battery (EVB), in addition to the traction-battery specialty systems used for industrial (or recreational) vehicles, is a battery used to power the propulsion system of a battery electric vehicle (BEV). These batteries are usually secondary (rechargeable) batteries, and are typically lithium-ion batteries. Traction batteries, specifically designed with a high ampere-hour capacity, are used in forklifts, electric golf carts, riding floor scrubbers, electric motorcycles, electric cars, trucks, vans, and other electric vehicles. Since their first commercial release in 1991, lithium-ion batteries have become an important technology for achieving low-carbon transportation systems. Most electric vehicles use lithium-ion batteries (Li-ions or LIBs). Lithium-ion batteries have a higher energy density, longer life span, and higher power density than most other practical batteries. 
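To make the energy-density gap just described concrete, here is a minimal back-of-the-envelope sketch in Python (not from the article): it estimates the pack mass needed for a fixed capacity under assumed energy densities that reflect commonly published ranges for lead-acid and lithium-ion chemistries.

# Rough pack-mass comparison for a fixed usable capacity.
# The Wh/kg figures and the 60 kWh pack size are illustrative assumptions
# reflecting commonly published ranges, not values from this article.

PACK_CAPACITY_KWH = 60  # assumed mid-size EV pack

energy_density_wh_per_kg = {
    "lead-acid": 35,      # roughly 30-40 Wh/kg is typical
    "lithium-ion": 200,   # roughly 150-250 Wh/kg is typical
}

for chemistry, density in energy_density_wh_per_kg.items():
    mass_kg = PACK_CAPACITY_KWH * 1000 / density
    print(f"{chemistry}: ~{mass_kg:,.0f} kg for a {PACK_CAPACITY_KWH} kWh pack")

Under these assumptions, a lead-acid pack would weigh well over a tonne while the lithium-ion equivalent comes in around 300 kg, which is why lithium chemistry made practical consumer EVs feasible.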
Complicating factors include safety, durability, thermal breakdown, environmental impact, and cost. Li-ion batteries should be used within safe temperature and voltage ranges to operate safely and efficiently. Increasing a battery's lifespan decreases effective costs and environmental impact. One technique is to operate a subset of the battery cells at a time and to switch among these subsets. In the past, nickel–metal hydride batteries were used in some electric cars, such as those made by General Motors. These battery types are considered outdated due to their tendency to self-discharge in the heat. Furthermore, a patent for this type of battery was held by Chevron, which created a problem for their widespread development. These factors, coupled with their high cost, have led to lithium-ion batteries becoming the predominant battery for EVs. The prices of lithium-ion batteries have declined dramatically over the past decade, contributing to a reduction in price for electric vehicles, but an increase in the price of critical minerals such as lithium from 2021 to the end of 2022 put pressure on historical battery price decreases. The power of a vehicle's electric motor, as in other machines, is measured in kilowatts (kW). Electric motors can deliver their maximum torque over a wide RPM range. This means that the performance of a vehicle with a 100 kW electric motor exceeds that of a vehicle with a 100 kW internal combustion engine, which can only deliver its maximum torque within a limited range of engine speeds. Efficiency of charging varies considerably depending on the type of charger, and energy is lost during the process of converting electrical energy to mechanical energy. Usually, direct current (DC) electricity is fed into a DC/AC inverter where it is converted to alternating current (AC) electricity, and this AC electricity is fed to a 3-phase AC motor. For electric trains, forklift trucks, and some electric cars, DC motors are often used. In some cases, universal motors are used, and then AC or DC may be employed. In recent production vehicles, various motor types have been implemented; for instance, induction motors in Tesla vehicles and permanent magnet machines in the Nissan Leaf and Chevrolet Bolt. Electric motors are mechanically very simple, often achieve 90% energy conversion efficiency over the full range of speeds and power output, and can be precisely controlled. Motion is provided by a rotary electric motor. However, it is possible to "unroll" the motor to drive directly against a special matched track. These linear motors are used in maglev trains, which float above the rails supported by magnetic levitation. This allows for almost no rolling resistance of the vehicle and no mechanical wear and tear of the train or track. In addition to the high-performance control systems needed, the switching and curving of tracks becomes difficult with linear motors, which to date has restricted their operations to high-speed point-to-point services. Electric traction allows the use of regenerative braking, in which the motors are used as brakes and become generators that transform the motion of (usually) a train into electrical power that is then fed back into the lines. This system is particularly advantageous in mountainous operations, as descending vehicles can produce a large portion of the power required for those ascending, and in start-and-stop city use. 
This regenerative system is only viable if the system is large enough to use the power generated by descending vehicles. Electric motors can be finely controlled and provide high torque from a standstill, unlike internal combustion engines, and do not need multiple gears to match power curves. This removes the need for gearboxes and torque converters. EVs provide quiet and smooth operation and consequently have less noise and vibration than internal combustion engines. While this is a desirable attribute, it has also evoked concern that the absence of the usual sounds of an approaching vehicle poses a danger to blind, elderly, and very young pedestrians. To mitigate this situation, many countries mandate warning sounds when EVs are moving slowly, up to a speed at which normal motion and rotation (road, suspension, electric motor, etc.) noises become audible. Electric motors do not require oxygen, unlike internal combustion engines; this is useful for submarines and for space rovers. Types It is generally possible to equip any kind of vehicle with an electric powertrain. A pure-electric vehicle or all-electric vehicle is powered exclusively through electric motors. The electricity may come from a battery (battery electric vehicle), solar panel (solar vehicle), or fuel cell (fuel cell vehicle). A hybrid electric vehicle (HEV) is a type of hybrid vehicle that couples a conventional internal combustion engine (ICE) with one or more electric motors in a combined propulsion system. The presence of the electric powertrain, which has inherently better energy conversion efficiency, is intended to achieve either better fuel economy or better acceleration performance than a conventional vehicle. There is a variety of HEV types, and the degree to which each functions as an electric vehicle (EV) also varies. The most common form of HEV is the hybrid electric passenger car, although hybrid electric trucks (pickups, tow trucks, and tractors), buses, motorboats, and aircraft also exist. Modern HEVs use energy recovery technologies such as motor–generator units and regenerative braking to recycle the vehicle's kinetic energy into electric energy via an alternator, which is stored in a battery pack or a supercapacitor. Some varieties of HEV use an internal combustion engine to directly drive an electrical generator, which either recharges the vehicle's batteries or directly powers the electric traction motors; this combination is known as a range extender. Many HEVs reduce idle emissions by temporarily shutting down the combustion engine at idle (such as when waiting at a traffic light) and restarting it when needed; this is known as a start-stop system. A hybrid-electric system produces lower tailpipe emissions than a comparably sized petrol-engine vehicle, since the hybrid's petrol engine usually has a smaller displacement and thus lower fuel consumption than that of a conventional petrol-powered vehicle. If the engine is not used to drive the car directly, it can be geared to run at maximum efficiency, further improving fuel economy. There are different ways that a hybrid electric vehicle can combine the power from an electric motor and the internal combustion engine. The most common type is a parallel hybrid, which connects the engine and the electric motor to the wheels through mechanical coupling. In this scenario, the electric motor and the engine can drive the wheels directly. 
Series hybrids use only the electric motor to drive the wheels and are often referred to as extended-range electric vehicles (EREVs) or range-extended electric vehicles (REEVs). There are also series–parallel hybrids, where the vehicle can be powered by the engine working alone, the electric motor on its own, or both working together; this is designed so that the engine can run at its optimum range as often as possible. A plug-in electric vehicle (PEV) is any motor vehicle that can be recharged from an external source of electricity, such as a wall socket, with the electricity stored in its rechargeable battery packs driving or contributing to driving the wheels. PEV is a subcategory of electric vehicles that includes battery electric vehicles (BEVs), plug-in hybrid vehicles (PHEVs), and electric vehicle conversions of hybrid electric vehicles and conventional internal combustion engine vehicles. A range-extended electric vehicle (REEV) is a vehicle powered by an electric motor and a plug-in battery. An auxiliary combustion engine is used only to supplement battery charging and not as the primary source of power. On-road electric vehicles include electric cars, electric trolleybuses, electric buses, battery electric buses, electric trucks, electric bicycles, electric motorcycles and scooters, personal transporters, neighborhood electric vehicles, golf carts, milk floats, and forklifts. Off-road vehicles include electrified all-terrain vehicles and electric tractors. An electric truck is a battery electric vehicle (BEV) designed to transport cargo, carry specialized payloads, or perform other utilitarian work. Electric trucks have served niche applications like milk floats, pushback tugs, and forklifts for over a hundred years, typically using lead–acid batteries, but the rapid development of lighter and more energy-dense battery chemistries in the twenty-first century has broadened the range of applicability of electric propulsion to trucks in many more roles. Electric trucks reduce noise and pollution relative to internal-combustion trucks. Due to the high efficiency and low component counts of electric powertrains, no fuel burning while idle, and silent and efficient acceleration, the costs of owning and operating electric trucks are dramatically lower than those of their predecessors. Long-distance freight has been the trucking segment least amenable to electrification, since the increased weight of batteries, relative to fuel, detracts from payload capacity, and the alternative, more frequent recharging, detracts from delivery time. By contrast, short-haul urban delivery has been electrified rapidly, since the clean and quiet nature of electric trucks fits well with urban planning and municipal regulation, and the capacities of reasonably sized batteries are well suited to daily stop-and-go traffic within a metropolitan area. The fixed nature of a rail line makes it relatively easy to power EVs through permanent overhead lines or electrified third rails, eliminating the need for heavy onboard batteries. Electric locomotives, electric multiple units, electric trams (also called streetcars or trolleys), electric light rail systems, and electric rapid transit are all in common use today, especially in Europe and Asia. Since electric trains do not need to carry a heavy internal combustion engine or large batteries, they can have very good power-to-weight ratios. 
This allows high-speed trains such as France's double-deck TGVs to operate at speeds of 320 km/h (200 mph) or higher, and electric locomotives to have a much higher power output than diesel locomotives. In addition, they have higher short-term surge power for fast acceleration, and using regenerative brakes can put braking power back into the electrical grid rather than wasting it. Maglev trains are also nearly always EVs. There are also battery electric passenger trains operating on non-electrified rail lines. Particularly in Europe, fuel-cell electric trains are gaining in popularity as replacements for diesel–electric units. In Germany, several Länder have ordered Alstom Coradia iLINT trainsets, in service since 2018, with France also planning to order trainsets. The United Kingdom, the Netherlands, Denmark, Norway, Italy, Canada, and Mexico are equally interested. In France, the SNCF plans to replace all its remaining diesel-electric trains with hydrogen trains by 2035. In the United Kingdom, Alstom announced in 2018 a plan to retrofit British Rail Class 321 trainsets with fuel cells. Electric boats were popular around the turn of the 20th century. Interest in quiet and potentially renewable marine transportation has steadily increased since the late 20th century, as solar cells have given motorboats the infinite range of sailboats. Electric motors can be, and have been, used in sailboats instead of traditional diesel engines. Electric ferries operate routinely. Submarines use batteries (charged by diesel or gasoline engines at the surface), nuclear power, fuel cells, or Stirling engines to run electric motor-driven propellers. Fully electric tugboats are in use in Auckland, New Zealand (June 2022), Vancouver, British Columbia (October 2023), and San Diego, California. Since the beginnings of aviation, electric power for aircraft has been the subject of a great deal of experimentation. Current electric aircraft include piloted and unpiloted aerial vehicles. Electric power has a long history of use in spacecraft. The power sources used for spacecraft are batteries, solar panels, and nuclear power. Current methods of propelling a spacecraft with electricity include the arcjet rocket, the electrostatic ion thruster, the Hall-effect thruster, and Field Emission Electric Propulsion. Electric vehicles are the only option for rovers, as there is simply no oxygen gas to drive combustion engines in outer space and exoplanetary atmospheres. Crewed and uncrewed electric vehicles have been used to explore the Moon and other planets in the Solar System. On the last three missions of the Apollo program in 1971 and 1972, astronauts drove silver-oxide battery-powered Lunar Roving Vehicles distances of up to 35.7 kilometers (22.2 mi) on the lunar surface. Solar-powered, remotely controlled uncrewed rovers have also explored the Moon and Mars. Charging/fueling A charging station, also known as a charge point, chargepoint, or electric vehicle supply equipment (EVSE), is a power supply device that supplies electrical power for recharging the on-board battery packs of plug-in electric vehicles (including battery electric vehicles, electric trucks, electric buses, neighborhood electric vehicles, and plug-in hybrid vehicles). There are two main types of EV chargers: alternating current (AC) charging stations and direct current (DC) charging stations. Electric vehicle batteries can only be charged with direct current electricity, while most mains electricity is delivered from the power grid as alternating current. 
For this reason, most electric vehicles have a built-in AC-to-DC converter, commonly known as the "on-board charger" (OBC). At an AC charging station, AC power from the grid is supplied to this onboard charger, which converts it into DC power to recharge the battery. DC chargers provide higher-power charging (which requires much larger AC-to-DC converters) by building the converter into the charging station to avoid the size, weight, and cost restrictions inside vehicles. The station then directly supplies DC power to the vehicle, bypassing the onboard converter. Most modern electric vehicles can accept both AC and DC power; the sketch below puts typical charging times for each into rough numbers. Instead of recharging EVs from electric sockets, batteries could be mechanically replaced at special stations in a few minutes (battery swapping). Batteries with greater energy density, such as metal–air fuel cells, cannot always be recharged in a purely electric way, so some form of mechanical recharge may be used instead. A zinc–air battery, technically a fuel cell, is difficult to recharge electrically, so it may be "refueled" by periodically replacing the anode or electrolyte instead. General Motors (GM) is adding a capability called V2H, or bidirectional charging, to allow its new electric vehicles to send power from their batteries to the owner's home. GM will start with 2024 models, including the Silverado and Blazer EVs, and promises to continue the feature through to model year 2026. This could be helpful to the owner during unexpected power grid outages, because an electric vehicle is effectively a giant battery on wheels. Considerations EVs release no tailpipe air pollutants, reducing respiratory illnesses such as asthma. By reducing certain types of air pollution, such as nitrogen dioxide, EVs could also prevent hundreds of thousands of early deaths every year, especially from trucks and traffic in cities. Additionally, EVs produce significantly less noise pollution in urban areas, improving the quality of life overall. The carbon emissions from producing and operating an EV are, in the majority of cases, less than those of producing and operating a conventional vehicle. When pursuing a cost-responsive electric charging strategy (instead of an emission-responsive charging strategy), considerably higher emissions might arise, as the embedded carbon emissions of grid electricity vary over time. EVs in urban areas almost always pollute less than internal combustion vehicles. However, EVs are charged with electricity that may be generated by means that have health and environmental impacts. This is particularly relevant in places that rely on coal-powered electricity grids. EVs also have negative environmental impacts due to the manufacturing and recycling of batteries. The full environmental impact of electric vehicles includes the life cycle impacts of carbon and sulfur emissions, as well as toxic metals entering the environment. Despite that, ICE vehicles use far more raw materials over their lifetime than EVs. One source estimates that over a fifth of the lithium and about 65% of the cobalt needed for electric cars will come from recycled sources by 2035. On the other hand, when counting the large quantities of fossil fuel that non-electric cars consume over their lifetime, electric cars can be considered to dramatically reduce raw-material needs. One limitation of the environmental potential of EVs is that simply switching the existing privately owned car fleet from ICEs to EVs will not free up road space for active travel or public transport. 
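To put the AC-versus-DC charging distinction described above into rough numbers, the following Python sketch estimates session times for a hypothetical pack; the pack size, charge window, and charger powers are all assumptions for illustration, and real sessions add conversion losses and taper near a full battery.

# Rough charging-time comparison across assumed AC and DC charger powers.

PACK_KWH = 60           # assumed usable pack capacity
CHARGE_FRACTION = 0.6   # charging from 20% up to 80%

chargers_kw = {
    "AC single-phase (limited by onboard charger)": 7.4,
    "AC three-phase (limited by onboard charger)": 11.0,
    "DC fast charger": 150.0,
}

energy_needed_kwh = PACK_KWH * CHARGE_FRACTION
for name, power_kw in chargers_kw.items():
    hours = energy_needed_kwh / power_kw  # ignores losses and taper
    print(f"{name}: ~{hours:.1f} h for a 20-80% session")

The order-of-magnitude difference (hours on AC versus tens of minutes on DC) follows directly from moving the large converter out of the vehicle and into the station.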
Electric micromobility vehicles, such as e-bikes, may contribute to the decarbonisation of transport systems, especially outside of urban areas that are already well served by public transport. Information regarding the sustainability of the production process of batteries has become a politically charged topic. Business processes of raw material extraction in practice raise issues of transparency and accountability in the management of extractive resources. In the complex supply chain of lithium technology, there are diverse stakeholders representing corporate interests, public interest groups, and political elites that are concerned with outcomes from the technology's production and use. One possibility for achieving balanced extractive processes would be the establishment of commonly agreed-upon standards on the governance of technology worldwide. Compliance with these standards can be assessed with the Assessment of Sustainability in Supply Chains Frameworks (ASSC), in which the qualitative assessment consists of examining governance and social and environmental commitment, while the quantitative assessment uses indicators such as management systems and standards, compliance, and social and environmental indicators. The initial phase of electric vehicle production incurs an environmental cost, often referred to as a "carbon debt", primarily driven by the energy-intensive manufacturing of high-voltage batteries and the extraction of critical raw materials. Rare-earth metals (neodymium, dysprosium) and other mined metals (copper, nickel, iron) are used in EV motors, while lithium, cobalt, and manganese are used in the batteries. In 2023 the US State Department said that the supply of lithium would need to increase 42-fold by 2050 globally to support a transition to clean energy. Most lithium-ion battery production occurs in China, where the bulk of the energy used is supplied by coal-burning power plants. The extraction and processing of these metals contributes to habitat destruction and environmental degradation. For instance, the mining of minerals such as lithium and cobalt, essential components of current battery chemistries, carries significant localized environmental hazards. Lithium mining, frequently conducted using water-intensive brine extraction, contributes to global carbon emissions, estimated at over 1.3 million tonnes of carbon annually, with every tonne of mined lithium equating to 15 tonnes of CO2 released into the atmosphere. In regions rich in cobalt, such as the Democratic Republic of the Congo (DRC), environmental costs are substantial, including deforestation, habitat destruction, and water pollution. Scientists have noted high radioactivity levels in some mining regions, and industrial processes, including the pulverization of rock, release dust that causes respiratory health issues for nearby populations. Open-pit nickel mining has led to environmental degradation and pollution in developing countries such as the Philippines and Indonesia. In 2024, nickel mining and processing was one of the main causes of deforestation in Indonesia. In 2022, the manufacturing of an EV emitted on average around 50% more CO2 than that of an equivalent internal combustion engine vehicle, but this difference is more than offset by the much higher emissions from the oil used in driving an internal combustion engine vehicle over its lifetime compared with those from generating the electricity used to drive the EV. 
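The ~50% manufacturing-emissions gap just mentioned lends itself to a simple payback estimate. In the Python sketch below, only that 50% figure comes from the text; the absolute manufacturing and per-kilometre emissions are stated assumptions chosen for illustration.

# Illustrative "carbon debt" payback estimate.

ICE_MANUFACTURING_T = 7.0                        # assumed tonnes CO2 to build an ICE car
EV_MANUFACTURING_T = ICE_MANUFACTURING_T * 1.5   # ~50% more, per the text above

ICE_DRIVING_G_PER_KM = 250   # assumed fuel-cycle emissions per km
EV_DRIVING_G_PER_KM = 100    # assumed grid emissions per km on a mixed grid

extra_debt_g = (EV_MANUFACTURING_T - ICE_MANUFACTURING_T) * 1_000_000
saving_g_per_km = ICE_DRIVING_G_PER_KM - EV_DRIVING_G_PER_KM

breakeven_km = extra_debt_g / saving_g_per_km
print(f"Manufacturing gap repaid after ~{breakeven_km:,.0f} km")
# ~23,333 km under these assumptions; a cleaner grid shortens the payback.

A cleaner grid widens the per-kilometre saving and shortens the break-even distance, which is the dynamic the following paragraphs describe.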
In 2023, Greenpeace issued a video criticizing the view that EVs are a "silver bullet for climate", arguing that the construction phase has a high environmental impact. For example, the rise in Hyundai's SUV sales almost eliminated the company's climate benefits from switching to EVs, because even electric SUVs have a high carbon footprint, consuming large amounts of raw materials and energy during construction. Greenpeace proposes a mobility-as-a-service concept instead, based on biking, public transport and ride sharing. Despite the initial manufacturing footprint, a life-cycle assessment (LCA) approach consistently confirms that electric vehicles yield superior overall lifetime greenhouse gas (GHG) performance compared to equivalent ICE vehicles. The extent of the environmental benefit is intrinsically linked to the carbon intensity of the electricity grid used to power the vehicle. In regions like China, battery electric vehicles currently achieve approximately 40% lower emissions compared to ICE vehicles over their full lifespan. However, in countries with high-intensity grids, such as India, the immediate advantage is more modest, resulting in only about 20% lower emissions (saving less than 10 tonnes of CO2 equivalent). This context is temporary, as significant efforts are underway globally to decarbonize electricity generation; for instance, the emissions intensity of India's grid is projected to fall by 60% by 2035, rapidly increasing the environmental benefit of electrification. An alternative method of sourcing essential battery materials under deliberation by the International Seabed Authority is deep-sea mining; as of 2023, however, carmakers were not using it. Regulatory mechanisms, such as the EU Battery Regulation (Regulation (EU) 2023/1542), were introduced to reduce the environmental impact. It covers the entire battery life cycle, from design and production (including "battery passports") to use and end-of-life management. There are also national policies, like those in France, which cap subsidies based on the carbon intensity of vehicle production. EV 'tank-to-wheels' efficiency is about a factor of three higher than that of internal combustion engine vehicles. Energy is not consumed while the vehicle is stationary, unlike internal combustion engines, which consume fuel while idling. In 2022, EVs enabled a net reduction of about 80 Mt of GHG emissions on a well-to-wheels basis, and the net GHG benefit of EVs will increase over time as the electricity sector is decarbonised. Well-to-wheel efficiency of an EV has less to do with the vehicle itself and more to do with the method of electricity production. A particular EV would instantly become twice as efficient if electricity production were switched from fossil fuels to low-carbon sources such as wind power, tidal power, solar power, and nuclear power. Thus, when "well-to-wheels" is cited, the discussion is no longer about the vehicle, but rather about the entire energy supply infrastructure – in the case of fossil fuels this should also include energy spent on exploration, mining, refining, and distribution. The lifecycle analysis of EVs shows that even when powered by the most carbon-intensive electricity in Europe, they emit less greenhouse gases than a conventional diesel vehicle. Electric vehicles may have shorter range compared to vehicles with internal combustion engines, which is why the electrification of long-distance transport, such as long-distance shipping, remains challenging.
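The grid-intensity dependence discussed in the life-cycle passage above can be made concrete with the toy comparison below. Every parameter (lifetime distance, consumption, manufacturing emissions, and the two grid intensities) is an assumption chosen for illustration, not the study values behind the China and India figures cited above.

```python
# Lifetime GHG comparison under grids of different carbon intensity,
# echoing the LCA framing above. Illustrative assumptions: 200,000 km
# lifetime, EV consumption of 0.20 kWh/km, manufacturing emissions of
# 12 t (EV) vs 8 t (ICE), and ICE well-to-wheels emissions of 250 g/km.

LIFETIME_KM = 200_000
EV_KWH_PER_KM, EV_MFG_T = 0.20, 12.0
ICE_G_PER_KM, ICE_MFG_T = 250.0, 8.0

def lifetime_tonnes(mfg_t: float, g_per_km: float) -> float:
    """Total lifetime emissions: manufacturing plus operation, in tonnes."""
    return mfg_t + g_per_km * LIFETIME_KM / 1e6

ice = lifetime_tonnes(ICE_MFG_T, ICE_G_PER_KM)
for grid_name, g_per_kwh in [("relatively clean grid", 300),
                             ("coal-heavy grid", 900)]:
    ev = lifetime_tonnes(EV_MFG_T, EV_KWH_PER_KM * g_per_kwh)
    print(f"{grid_name}: EV {ev:.0f} t vs ICE {ice:.0f} t "
          f"({(1 - ev / ice) * 100:.0f}% lower)")
```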
In 2022, the sales-weighted average range of small BEVs sold in the United States was nearly 350 km, while in France, Germany and the United Kingdom it was just under 300 km, compared to under 220 km in China. Electric vehicles typically carry a higher initial purchase price than comparable ICE vehicles. This elevated upfront cost constitutes a significant barrier to entry: while long-term financial analyses may favor EVs, the immediate capital outlay often dictates purchasing decisions, slowing the pace of the overall market transition. The higher initial price is often offset by a superior total cost of ownership (TCO) over the vehicle's lifespan, as operational expenses for EVs are markedly lower. Advances in lithium-ion batteries, driven at first by the personal-electronics industry, allow full-sized, highway-capable EVs to travel nearly as far on a single charge as conventional cars go on a single tank of gasoline. Lithium batteries have been made safe, can be recharged in minutes instead of hours (see recharging time), and now last longer than the typical vehicle (see lifespan). The production cost of these lighter, higher-capacity lithium-ion batteries is gradually decreasing as the technology matures and production volumes increase. Research is also underway to improve battery reuse and recycling, which would further reduce the environmental impact of batteries. Many companies and researchers are also working on newer battery technologies, including solid-state batteries and alternate technologies. The risk of requiring an out-of-warranty battery replacement represents the greatest source of long-term financial uncertainty for many prospective EV retail owners. Despite consumer anxieties, actual battery replacement events are statistically rare, and modern EV batteries are demonstrating significantly greater durability than initially anticipated. Studies have confirmed that EV batteries can outlast the vehicle's lifetime with minimal degradation. The financial risk associated with future replacement is also shrinking due to advancements in battery manufacturing and economics; industry reports project that global battery oversupply will persist through 2028, accelerating price reductions. Electric vehicle range and battery performance are negatively affected by extreme cold, as low ambient temperatures necessitate diverting energy to cabin heating and to maintaining optimal battery temperature. A comprehensive winter performance study by the Canadian Automobile Association (CAA) revealed that cold weather significantly impacts driving range, with vehicles experiencing reductions between 14% and 39% compared to their official estimates when operated at −15 °C. This quantifiable range loss presents a significant practical challenge for owners in cold climates. However, as the industry matures, increasing standardization and optimization of thermal systems are expected to mitigate cold-weather range anxiety. A heat pump system, capable of cooling the cabin during summer and heating it during winter, is an efficient way of heating and cooling EVs. While connected to the grid, battery EVs can be preheated or cooled with little or no use of battery energy, especially for short trips. Most new electric cars come with heat pumps as standard. Electric vehicle safety regulations have evolved significantly since the initial UN ECE Regulation 100.
Current regulations focus on thermal runaway protection, with various international standards mandating advance warning systems and thermal propagation containment measures. Recent technological developments address thermal runaway concerns more proactively. Advanced fire protection materials for EV batteries have become a critical research area, with developments in ceramics, mica, aerogels, coatings, and phase change materials designed to prevent or delay thermal runaway propagation. Current regulations vary by region, with China being an early adopter of thermal runaway-specific requirements mandating prevention of fire or smoke exiting battery packs for five minutes after an event occurs. However, industry experts suggest longer escape times may be necessary for future regulations, with original equipment manufacturers targeting extended protection periods to pre-empt future regulatory requirements. Research published in the British Medical Journal indicates that electric cars hit pedestrians at twice the rate of petrol or diesel vehicles, due to being quieter. The infrastructure for vehicle repairs after accidents is a concern for insurers and mechanics due to safety requirements. Although no fatalities had been reported in electric vehicle repair as of 2024, repairing the high-voltage battery involves electric shock, arc flash and fire hazards. Batteries and other components must be carefully evaluated rather than being totally written off by insurers. A 2003 study in the United Kingdom found that "[p]ollution is most concentrated in areas where young children and their parents are more likely to live and least concentrated in areas to which the elderly tend to migrate," and that "those communities that are most polluted and which also emit the least pollution tend to be amongst the poorest in Britain." A 2019 UK study found that "households in the poorest areas emit the least NOx and PM, whilst the least poor areas emitted the highest, per km, vehicle emissions per household through having higher vehicle ownership, owning more diesel vehicles and driving further." The transport planner Karel Martens warned in a 2009 article that electric vehicles only solve the problem of car emissions, while doing nothing to improve cars' impact on the amount of space they use or on parking issues. Martens, who works in the field of transport justice, also said that electric vehicles do not improve accessibility for people who do not own cars. Government incentivization The IEA suggests that taxing inefficient internal combustion engine vehicles could encourage adoption of EVs, with the taxes raised being used to fund subsidies for EVs. Government procurement is sometimes used to encourage national EV manufacturers. Many countries plan to ban sales of new fossil-fuel vehicles between 2025 and 2040. Many governments offer incentives to promote the use of electric vehicles, with the goals of reducing air pollution and oil consumption. Some incentives intend to increase purchases of electric vehicles by offsetting the purchase price with a grant. Other incentives include lower tax rates or exemption from certain taxes, and investment in charging infrastructure. In the United States, federal tax credits are available to electric vehicle buyers to help lower the initial purchase cost. European countries like Norway and Germany offer tax exemptions and reduced registration fees to encourage EV adoption.
Partnerships between EV manufacturers and utility companies have also provided incentives and sales on EV purchases to promote cleaner energy usage and transportation. Companies selling EVs have partnered with local electric utilities to provide large incentives on some electric vehicles. Infrastructure management With the increase in the number of electric vehicles, it is necessary to create an appropriate number of charging stations to supply the increasing demand. While the deployment of public charging infrastructure is accelerating globally, the adoption rate of EVs risks outpacing network expansion, leading to potential future congestion. Experts concur that large-scale EV adoption will inevitably stress local distribution networks if charging is unmanaged and concentrated in peak hours of electricity demand. This unmanaged demand risks grid instability and necessitates proactive management from utilities. In the United States, charging ports saw quarterly increases between 4.6% and 6.3% in early 2024. However, projections indicate a measurable risk of insufficient density: in the US, the ratio of electric light-duty vehicles per public charging point is projected to climb dramatically from approximately 18:1 in 2023 to over 80:1 by 2035. This sharply increasing ratio suggests that current deployment, while active, may be structurally insufficient to prevent charging queues unless aggressive government targets, such as the US objective of 500,000 public charging ports by 2030, are met and exceeded. Policy mandates are driving targeted deployment to alleviate infrastructure pressure. The US has announced subsidies to expand convenient charging, and countries like India have set requirements for installing chargers every 25 km along major highways. Logistical hurdles regarding charge times are being addressed by rapid advancements in charging technology. Commercially available DC fast charging stations currently deliver 250–350 kW, and regulatory frameworks, such as the EU's Alternative Fuels Infrastructure Regulation (AFIR), are preparing for the eventual commercialization of 1 MW charging stations. The transition to 1 MW charging, however, requires significant investment in both installation and grid upgrades. Since EVs can be plugged into the electric grid when not in use, battery-powered vehicles could reduce the need for dispatchable generation by feeding electricity into the grid from their batteries during periods of high demand and low supply (such as just after sunset), while doing most of their charging at night or midday, when there is unused generating capacity. This vehicle-to-grid (V2G) connection has the potential to reduce the need for new power plants, as long as vehicle owners do not mind the power company reducing the life of their batteries by draining them during peak demand. Electric vehicle parking lots can provide demand response. Current electricity infrastructure may need to cope with increasing shares of variable-output power sources such as wind and solar. This variability could be addressed by adjusting the speed at which EV batteries are charged, or possibly even discharged. Some concepts see battery exchanges and battery charging stations, much like today's gas/petrol stations. These would require enormous storage and charging capacity, which could be manipulated to vary the rate of charging and to output power during shortage periods, much as diesel generators are used for short periods to stabilize some national grids.
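The managed-charging and vehicle-to-grid ideas above amount to a simple scheduling rule: charge when prices (or demand) are low, and export a little at the evening peak. The toy scheduler below sketches that rule; the hourly price curve, battery limits, and thresholds are all invented for the example and do not represent any utility's actual tariff or protocol.

```python
# Toy smart-charging/V2G schedule: charge in cheap off-peak hours,
# discharge a small amount to the grid at the evening peak.
# Hourly prices and battery parameters are illustrative assumptions.

HOURLY_PRICE = [0.10]*6 + [0.20]*10 + [0.35]*4 + [0.15]*4  # $/kWh, 24 h
CAPACITY_KWH = 60.0
CHARGE_KW, DISCHARGE_KW = 7.4, 5.0
PEAK_PRICE, OFFPEAK_PRICE = 0.30, 0.12

def schedule(soc_kwh: float, target_kwh: float = 48.0):
    """Return (hour, action, state-of-charge) tuples for one day."""
    plan = []
    for hour, price in enumerate(HOURLY_PRICE):
        if price >= PEAK_PRICE and soc_kwh > target_kwh * 0.5:
            soc_kwh -= DISCHARGE_KW      # vehicle-to-grid export
            action = "discharge"
        elif price <= OFFPEAK_PRICE and soc_kwh < target_kwh:
            soc_kwh = min(CAPACITY_KWH, soc_kwh + CHARGE_KW)
            action = "charge"
        else:
            action = "idle"
        plan.append((hour, action, round(soc_kwh, 1)))
    return plan

for hour, action, soc in schedule(soc_kwh=30.0):
    print(f"{hour:02d}:00 {action:9s} SOC {soc} kWh")
```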
In-development technologies Conventional electric double-layer capacitors (supercapacitors) continue to be developed to achieve higher energy densities while maintaining their characteristic fast charging capabilities and extended lifespans. Recent research has focused on solid-state supercapacitor configurations that eliminate liquid electrolytes, providing enhanced safety and design flexibility. Advanced developments include all-graphene-oxide flexible solid-state supercapacitors with enhanced electrochemical performance, achieving areal capacitances of 14.5 mF cm⁻², among the highest values reported for any graphene-based supercapacitor. Recent breakthroughs include dual-storage-mechanism nanoscale solid-state lithium-ion supercapacitors utilizing atomic-layer-deposition-synthesized lithium phosphorus oxynitride (LiPON) as the solid-state electrolyte, demonstrating capacitance densities of 500 nF·mm⁻² with excellent cycling stability over ten thousand cycles. High-performance solid-state supercapacitors have been developed using silicon electrodes with graphene interconnected networks, showing performance characteristics comparable to high-power carbon-based supercapacitors. Advanced hybrid designs include all-solid-state planar micro-supercapacitors based on 2D vanadium nitride nanosheets and cobalt hydroxide nanoflowers, achieving energy densities of 12.4 mWh cm⁻³ and power densities of 1,750 mW cm⁻³. Flexible solid-state supercapacitors operating across wide temperature ranges, from −70 °C to 220 °C, have been demonstrated using polycation-polybenzimidazole blend electrolytes doped with phosphoric acid. Solid-state batteries represent one of the most promising next-generation battery technologies, offering potential advantages over conventional lithium-ion batteries including higher energy density, faster charging, improved safety, and longer lifespan. According to a comprehensive review in Chemical Engineering Journal, all-solid-state lithium batteries utilizing solid electrolytes are regarded as the next generation of energy storage devices, with recent breakthroughs significantly accelerating their path toward commercial viability. The Fraunhofer ISI Solid-State Battery Roadmap 2035+, developed with contributions from more than 100 European experts, provides a comprehensive assessment of solid-state battery development potential over the next decade, benchmarking against established lithium-ion batteries. According to market analysis published in Scientific Talks, solid-state batteries are projected to reach mass production with costs of 140–175 USD per kWh by 2028–2030, depending on technological and manufacturing challenges. Recent commercial developments include Mercedes-Benz and Factorial Energy conducting road tests of semi-solid-state batteries in the EQS sedan, promising a 25% increase in range with energy densities of 391 watt-hours per kilogram; this represents the world's first integration of lithium-metal solid-state batteries into a production vehicle. However, according to an IEEE Spectrum analysis, solid-state batteries face significant "production hell" challenges, with experts expressing pointed skepticism toward current technical announcements given the engineering obstacles that lie ahead. Toyota continues to lead development efforts, targeting solid-state battery production by 2027–2028 with goals of 1,000 km range and 10-minute fast charging capabilities.
The company claims recent technological advancements have overcome previous battery-life trade-offs, and it has switched its focus to mass-production readiness. Research published in ACS Energy Letters emphasizes that while all-solid-state batteries show promise for electric vehicles, significant challenges remain in Li-metal implementation, interfacial stability, and large-scale manufacturing. Sodium-ion batteries continue to show promise, with potential energy densities of 400 Wh/kg and minimal expansion/contraction during charge cycles, while relying on more abundant and cost-effective materials than lithium-ion technology. Recent research published in Energy & Fuels highlights sodium-ion and all-solid-state sodium batteries as promising choices for future energy storage systems due to abundant sodium resources and lower costs compared to lithium-based systems. Another improvement is to decouple the electric motor from the battery through electronic control, using supercapacitors to buffer large but short power demands and regenerative braking energy. The development of new cell types combined with intelligent cell management has improved both of the weak points mentioned above. Cell management involves not only monitoring the health of the cells but also a redundant cell configuration (one more cell than needed); with sophisticated switched wiring, it is possible to condition one cell while the rest are on duty. An electric road system (ERS) is a road which supplies electric power to vehicles travelling on it. Common implementations are overhead power lines above the road, ground-level power supply through conductive rails, and dynamic wireless power transfer (DWPT) through resonant inductive coils or inductive rails embedded in the road. Overhead power lines are limited to commercial vehicles, while ground-level rails and inductive power transfer can be used by any vehicle, which allows for public charging through power metering and billing systems. Of the three methods, ground-level conductive rails are estimated to be the most cost-effective. Government studies and trials have been conducted in several countries seeking a national electric road system (ERS) network. Korea was the first to implement an induction-based public electric road with a commercial bus line in 2013, after testing an experimental shuttle service in 2009, but it was shut down due to aging infrastructure amid controversy over the continued public funding of the technology. United Kingdom municipal projects in 2015 and 2021 found wireless electric roads financially unfeasible. The Swedish Transport Administration's electric road program started assessing electric road systems in 2013. After receiving ERS construction offers in excess of the project's budget in 2023, Sweden pursued cost-reduction measures for either wireless ERS or rail ERS. The project's final report, published in 2024, recommended against funding a national ERS in Sweden, as it would not be cost-effective unless the technology was also adopted by trading partners such as France and Germany. Following the report, the project was paused. Germany found in 2023 that the wireless electric road system (wERS) by Electreon collects only 64.3% of the transmitted energy, poses many difficulties during installation, and blocks access to other infrastructure in the road. Germany trialled overhead lines in three projects in the 2010s and 2020s and reported that they are too expensive, difficult to maintain, and pose a safety risk.
France found similar drawbacks with overhead lines as Germany did. France began several electric road pilot projects in 2023 for inductive and rail systems. Ground-level power supply systems are considered the most likely candidates. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Judah_ha-Nasi] | [TOKENS: 4277] |
Contents Judah ha-Nasi Judah ha-Nasi (Hebrew: יְהוּדָה הַנָּשִׂיא, Yəhūḏā hanNāsīʾ; Yehudah HaNasi or Judah the Prince or Judah the President) or Judah I, known simply as Rebbi or Rabbi, was a second-century rabbi (a tanna of the fifth generation) and chief redactor and editor of the Mishnah. He lived from approximately 135 to 217 CE. He was a key leader of the Jewish community in Roman-occupied Judea after the Bar Kokhba revolt. Name and titles The title nasi was used for presidents of the Sanhedrin. He was the first nasi to have this title added permanently to his name; in traditional literature he is usually called "Rabbi Yehuda ha-Nasi." Often, though (and always in the Mishnah), he is simply called Rabbi, "my teacher" (רבי), the master par excellence. He is occasionally called Rabbenu, "our master". He is also called "Rabbenu HaQadosh", "our holy master" (רבנו הקדוש), due to his deep piety. Biography Judah was born in 135 in the newly established Roman province of Syria Palaestina to Simeon ben Gamaliel II. According to the Talmud, he was of the Davidic line. He is said to have been born on the same day that Rabbi Akiva died as a martyr. The Talmud suggests that this was a result of divine providence: God had granted the Jewish people another leader of great stature to succeed Akiva. His place of birth is unknown. Judah spent his youth in the city of Usha in the Lower Galilee. His father presumably gave him the same education that he had received, including Koine Greek. This knowledge of Greek enabled him to become the Jews' intermediary with the Roman authorities. He favoured Greek as the language of the country over Jewish Palestinian Aramaic. In Judah's house, only the Hebrew language was spoken, and the maids of the house became known for their use of obscure Hebrew terminology. Judah devoted himself to the study of the oral and the written law. He studied under some of Akiva's most eminent students. As their student and through conversation with other prominent men who gathered about his father, he laid a strong foundation of scholarship for his life's work: the editing of the Mishnah. His teacher at Usha was Judah bar Ilai, who was officially employed in the house of the patriarch as judge in religious and legal questions. In later years, Judah described how in his childhood he read the Book of Esther at Usha in the presence of Judah bar Ilai. Judah felt special reverence for Jose ben Halafta, the student of Akiva who had the closest relationship with Simon ben Gamaliel. When, in later years, Judah raised objections to Jose's opinions, he would say: "We poor ones undertake to attack Jose, though our time compares with his as the profane with the holy!" Judah hands down a halakhah by Jose in Menachot 14a. Judah studied under Shimon bar Yochai in Teqoa, a place some have identified with Meron. He also studied with Eleazar ben Shammua. Judah did not study with Rabbi Meir, evidently in consequence of the conflicts which distanced Meir from the house of the patriarch. However, he considered himself lucky even to have seen Meir from behind. Another of Judah's teachers was Nathan the Babylonian, who also took a part in the conflict between Meir and the patriarch; Judah confessed that once, in a fit of youthful ardour, he had failed to treat Nathan with due reverence. In both halakhic and aggadic tradition, Judah's opinion is often opposed to Nathan's.
In the Jerusalemite tradition, Judah ben Korshai (the halakhic specialist mentioned as assistant to Simon ben Gamaliel) is designated as Judah's real teacher. Jacob ben Hanina (possibly the R. Jacob whose patronymic is not given and in whose name Judah quotes halakhic sentences) is also mentioned as one of Judah's teachers, and is said to have asked him to repeat halakhic sentences. Judah was also taught by his father (Simon ben Gamaliel); when the two differed on a halakhic matter, the father was generally stricter. Judah himself says: "My opinion seems to me more correct than that of my father"; and he then proceeds to give his reasons. Humility was a virtue ascribed to Judah, and he admired it greatly in his father, who openly recognised Shimon bar Yochai's superiority, thus displaying the same modesty as the Bnei Bathyra when they gave way to Hillel, and as Jonathan when he voluntarily gave precedence to his friend David. Nothing is known regarding the time when Judah succeeded his father as leader of the Jews remaining in Eretz Yisrael. According to Rashi, Judah's father Simon had served as the nasi or head of the Sanhedrin in Usha before it moved to Shefar'am (now Shefa-'Amr). According to a tradition, the country at the time of Simon ben Gamaliel's death not only was devastated by a plague of locusts but suffered many other hardships. From Shefar'am, the Sanhedrin transferred to Beit Shearim (now part of the Beit She'arim necropolis), where the Sanhedrin was headed by Judah. Here he officiated for a long time. Eventually, Judah moved with the court from Beit Shearim to Sepphoris, where he spent at least 17 years of his life. Judah chose Sepphoris chiefly because his ill health would improve in its high altitude and pure air. However, Judah's memory as a leader is principally associated with Beit She'arim: "The Sages taught: The verse states: "Justice, justice, shall you follow." This teaches that one should follow the Sages to the academy where they are found. For example [...] after Rabbi Yehuda HaNasi to Beit She'arim[.]" Among Judah's contemporaries in the early years of his activity were Eleazar ben Simeon, Ishmael ben Jose, Jose ben Judah, and Simeon ben Eleazar. His better-known contemporaries and students include Simon ben Manasseh, Pinchas ben Yair, Eleazar ha-Kappar and his son Bar Kappara, Hiyya the Great, Shimon ben Halafta, and Levi ben Sisi. Among his students who taught as the first generation of Amoraim after his death are Hanina bar Hama and Hoshaiah Rabbah in Eretz Yisrael, and Abba Arikha and Samuel of Nehardea in Babylon (the Jewish term for Lower Mesopotamia). Only scattered records of Judah's official activity exist. These include: the ordination of his students; the recommendation of students for communal offices; orders relating to the announcement of the new moon; the amelioration of the law relating to the Sabbatical year; and decrees relating to tithes in the frontier districts of Eretz Yisrael. The last-named he was obliged to defend against the opposition of the members of the patriarchal family. The ameliorations he intended for Tisha B'Av were prevented by the college. Many religious and legal decisions are recorded as having been rendered by Judah together with his court, the college of scholars. According to the Talmud, Rabbi Judah HaNasi was very wealthy and greatly revered in Rome.
He had a close friendship with "Antoninus", possibly the Emperor Antoninus Pius, though it is more likely that his famous friendship was with either Emperor Marcus Aurelius or with Marcus Aurelius Antoninus, also called Caracalla, who would consult Judah on various worldly and spiritual matters. Jewish sources tell of various discussions between Judah and Antoninus. These include the parable of the blind and the lame (illustrating the judgment of the body and the soul after death), and a discussion of the impulse to sin. The authority of Judah's office was enhanced by his wealth, which is referred to in various traditions. In Babylon, the hyperbolic statement was later made that even his stable-master was wealthier than King Shapur. His household was compared to that of the emperor. Simeon ben Menasya praised Judah by saying that he and his sons united in themselves beauty, power, wealth, wisdom, age, honour, and the blessings of children. During a famine, Judah opened his granaries and distributed corn among the needy. But he denied himself the pleasures procurable by wealth, saying: "Whoever chooses the delights of this world will be deprived of the delights of the next world; whoever renounces the former will receive the latter". The year of Judah's death is deduced from the statement that his student Abba Arikha left Eretz Yisrael for good not long before Judah's death, in the year 530 of the Seleucid era (219 CE). He assumed the office of patriarch during the reign of Marcus Aurelius and Lucius Verus (c. 165). Hence Judah, having been born about 135, became patriarch at the age of 30, and died at the age of about 85. The Talmud notes that Rabbi Judah the Prince lived for at least 17 years in Sepphoris, and that he applied to himself the biblical verse, "And Jacob lived in the land of Egypt seventeen years" (Genesis 47:28). According to a different calculation, he died on 15 Kislev, AM 3978 (around December 1, 217 CE), in Sepphoris, and his body was interred in the necropolis of Beit Shearim, 15.2 kilometres (9.4 mi) from Sepphoris; during his funeral procession, eighteen stops were made at different stations along the route to eulogise him. It is said that when Judah died, no one had the heart to announce his demise to the anxious people of Sepphoris, until the clever Bar Ḳappara broke the news in a parable, saying: "The heavenly host and earth-born men held the tablets of the covenant; then the heavenly host was victorious and seized the tablets." Judah's eminence as a scholar, which gave this period its distinctive impression, was characterised at an early date by the saying that since the time of Moses, the Torah and greatness (i.e. knowledge and rank) were united in no one to the same extent as in Judah I. Two of Judah's sons assumed positions of authority after his death: Gamaliel succeeded him as nasi, while Shimon became hakham of his yeshiva. According to some Midrashic and Kabbalistic legends, Judah ha-Nasi had a son named Yaavetz who ascended to Heaven without experiencing death. Various stories are told about Judah, illustrating different aspects of his character. It is said that once he saw a calf being led to the slaughtering-block, which looked at him with tearful eyes, as if seeking protection. He said to it: "Go; for you were created for this purpose!" Due to this unkind attitude toward the suffering animal, he was punished with years of illness.
Later, when his maid was about to kill some small animals which were in their house, he said to her: "Let them live, for it is written: '[God's] tender mercies are over all his works'." After this demonstration of compassion, his illness ceased. Judah also once said, "One who is ignorant of the Torah should not eat meat." The prayer he prescribed upon eating meat or eggs also indicates an appreciation of animal life: "Blessed be the Lord who has created many souls, in order to support by them the soul of every living being." He exclaimed, sobbing, in reference to three different stories of martyrs whose deaths made them worthy of future life: "One man earns his world in an hour, while another requires many years". He began to weep when Elisha ben Abuyah's daughters, who were soliciting alms, reminded him of their father's learning. In a legend relating to his meeting with Pinchas ben Yair, he is described as tearfully admiring the pious Pinchas' unswerving steadfastness, protected by a higher power. He was frequently interrupted by tears when explaining Lamentations 2:2 and illustrating the passage by stories of the destruction of Jerusalem and of the Temple. While explaining certain passages of Scripture, he was reminded of divine judgment and of the uncertainty of acquittal, and began to cry. Hiyya found him weeping during his last illness because death was about to deprive him of the opportunity of studying the Torah and of fulfilling the commandments. Once, when at a meal his students expressed their preference for soft tongue, he made this an opportunity to say, "May your tongues be soft in your mutual intercourse" (i.e., "Speak gently without disputing"). Before he died, Judah said: "I need my sons! ... Let the lamp continue to burn in its usual place; let the table be set in its usual place; let the bed be made in its usual place." While teaching Torah, Judah would often interrupt the lesson to recite the Shema Yisrael. He passed his hand over his eyes as he said it. When 70-year-old wine cured him of a protracted illness, he prayed: "Blessed be the Lord, who has given His world into the hands of guardians". He privately recited daily the following supplication on finishing the obligatory prayers: "May it be Thy will, my God and the God of my fathers, to protect me against the impudent and against impudence, from bad men and bad companions, from severe sentences and severe plaintiffs, whether a son of the covenant or not." Rabbi Judah ben Samuel of Regensburg relates that the spirit of Rebbi Judah used to visit his home, wearing Shabbat clothes, every Friday evening at dusk. He would recite Kiddush, and others would thereby discharge their obligation to hear Kiddush. One Friday night there was a knock at the door. "Sorry," said the maid, "I can't let you in just now because Rabbeinu HaKadosh is in the middle of Kiddush." From then on Judah stopped coming, since he did not want his coming to become public knowledge. Teachings According to Rabbinical Jewish tradition, God gave both the Written Law (the Torah) and the Oral Law to Moses on biblical Mount Sinai. The Oral Law is the oral tradition as relayed by God to Moses and from him, transmitted and taught to the sages (rabbinic leaders) of each subsequent generation. For centuries, the Torah appeared only as a written text transmitted in parallel with the oral tradition. 
Fearing that the oral traditions might be forgotten, Judah undertook the mission of consolidating the various opinions into one body of law, which became known as the Mishnah. This completed a project which had been mostly clarified and organised by his father and Nathan the Babylonian. The Mishnah consists of 63 tractates codifying Jewish law, which are the basis of the Talmud. According to Abraham ben David, the Mishnah was compiled by Rabbi Judah the Prince in 3949 AM, or the year 500 of the Seleucid era, which corresponds to 189 CE. The Mishnah contains many of Judah's own sentences, which are introduced by the words, "Rabbi says." The Mishnah was Judah's work, although it includes a few sentences by his son and successor, Gamaliel III, perhaps written after Judah's death. Both Talmuds assume as a matter of course that Judah is the originator of the Mishnah—"our Mishnah," as it was called in Babylon—and the author of the explanations and discussions relating to its sentences. However, Judah is more correctly considered the redactor of the Mishnah, rather than its author. The Mishnah is based on the systematic division of the halakhic material as formulated by Rabbi Akiva; Judah followed in his work the arrangement of the halakhot as taught by Rabbi Meir (Akiva's foremost student). Using the precedent of Rabbi Meir's reported actions, Judah ruled the Beit Shean region to be exempt from the requirements of tithing and shmita regarding produce grown there. He did the same for the cities of Kefar Tzemach, Caesarea and Beit Gubrin. He forbade his students to study in the marketplace, basing his prohibition on his interpretation of Song of Songs 7:2, and censured one of his students who violated this restriction. His exegesis includes many attempts to harmonise conflicting Biblical statements. Thus he harmonises the contradictions between Genesis 15:13 ("400 years") and 15:16 ("the fourth generation"); Exodus 20:16 and Deuteronomy 5:18; Numbers 9:23, 10:35 and ib.; and Deuteronomy 14:13 and Leviticus 11:14. The contradiction between Genesis 1:25 (which lists three categories of created beings) and 1:24 (which adds a fourth category, the "living souls") Judah explains by saying that this expression designates the demons, for whom God did not create bodies because the Sabbath had come. Noteworthy among the other numerous Scriptural interpretations which have been handed down in Judah's name are his clever etymological explanations, for example of Exodus 19:8-9; Leviticus 23:40; Numbers 15:38; II Samuel 17:27; Joel 1:17; and Psalms 68:7. He interpreted the words "to do the evil" in II Samuel 12:9 to mean that David did not really sin with Bathsheba, but only intended to do so, since she was actually divorced at the time he took her. Abba Arikha, Judah's student, ascribes this apology for King David to Judah's desire to justify his ancestor. A sentence praising King Hezekiah and an extenuating opinion of King Ahaz have also been handed down in Judah's name. Characteristic of Judah's appreciation of aggadah is his interpretation of the word "vayagged" (Exodus 19:9) to the effect that the words of Moses attracted the hearts of his hearers, like the aggadah does. Once, when the audience was falling asleep during his lecture, he made a ludicrous statement in order to revive their interest, and then explained the statement to be accurate in a metaphorical sense. Judah was especially fond of the Book of Psalms. He paraphrased the psalmist's wish "Let the words of my mouth ...
be acceptable in thy sight," thus: "May the Psalms have been composed for the coming generations; may they be written down for them; and may those that read them be rewarded like those that study halakhic sentences". He said that the Book of Job was important if only because it presented the sin and punishment of the generations of the Flood. He proves from Exodus 16:35 that there is no chronological order in the Torah. Referring to the prophetic books, he says: "All the Prophets begin with denunciations and end with comfortings". Even the genealogical portions of the Book of Chronicles must be interpreted. It appears that there was an aggadic collection containing Judah's answers to exegetical questions. Among these questions may have been the one which Judah's son Simeon addressed to him. References This article incorporates text from a publication now in the public domain: Solomon Schechter; Wilhelm Bacher (1901–1906). "Judah I". In Singer, Isidore; et al. (eds.). The Jewish Encyclopedia. New York: Funk & Wagnalls. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-:222_204-0] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Minecraft was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; most maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
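The block-centred gameplay described above maps naturally onto a sparse voxel store: a dictionary keyed by integer (x, y, z) coordinates, where an absent key means air. The sketch below is an illustrative model of the mine-and-place loop, not Minecraft's actual implementation.

```python
# Minimal sparse-voxel world: blocks live in a dict keyed by (x, y, z).
# Illustrative model of mine/place mechanics, not Minecraft's real code.

from typing import Dict, Optional, Tuple

Coord = Tuple[int, int, int]

class VoxelWorld:
    def __init__(self) -> None:
        self.blocks: Dict[Coord, str] = {}   # absent key = air

    def place(self, pos: Coord, block: str) -> bool:
        """Place a block if the position is empty (air)."""
        if pos in self.blocks:
            return False
        self.blocks[pos] = block
        return True

    def mine(self, pos: Coord) -> Optional[str]:
        """Remove and return the block at pos, or None if it is air."""
        return self.blocks.pop(pos, None)

world = VoxelWorld()
world.place((0, 64, 0), "dirt")
world.place((0, 65, 0), "torch")
drop = world.mine((0, 64, 0))   # mined blocks become item drops
print(drop, world.blocks)       # dirt {(0, 65, 0): 'torch'}
```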
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
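Because terrain is derived deterministically from the map seed described above, the same seed always reproduces the same world. The sketch below illustrates that property with a simple hash-based height function; Minecraft's real generator is far more elaborate (layered noise, biomes, structures), so this shows only the principle of seeded determinism.

```python
# Seeded, deterministic height map: the same seed always yields the same
# terrain, mirroring Minecraft's map-seed behaviour. The noise function
# here is a simple hash-based stand-in for the game's real generator.

import hashlib

def column_height(seed: int, x: int, z: int,
                  base: int = 64, amplitude: int = 12) -> int:
    """Deterministic terrain height for the column at (x, z)."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    noise = digest[0] / 255.0   # pseudo-random value in [0, 1]
    return base + int(noise * amplitude)

def render_strip(seed: int, width: int = 16) -> str:
    return " ".join(f"{column_height(seed, x, 0):3d}" for x in range(width))

print(render_strip(seed=42))   # identical output on every run and machine
print(render_strip(seed=42))   # same seed -> same terrain
print(render_strip(seed=43))   # different seed -> different terrain
```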
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough. The poem takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or, on peaceful difficulty, continuously. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience their maps as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Bedrock Edition Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Bedrock Edition Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
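Resource packs like those mentioned above are, at their core, zip archives containing a pack.mcmeta descriptor plus asset files whose paths mirror (and thereby override) the game's defaults. The sketch below writes a minimal skeleton; note that the pack_format value is version-dependent and the one used here is an assumption, as are the file names.

```python
# Sketch: writing the skeleton of a Minecraft resource pack as a zip.
# A pack is essentially a zip with a pack.mcmeta descriptor at its root;
# the pack_format number below depends on the game version (assumed here).

import json
import zipfile

PACK_META = {"pack": {"pack_format": 15,   # version-dependent; assumed
                      "description": "Example resource pack skeleton"}}

with zipfile.ZipFile("example_pack.zip", "w") as pack:
    pack.writestr("pack.mcmeta", json.dumps(PACK_META, indent=2))
    # Real packs add files under assets/minecraft/textures/... that
    # override the game's default assets by matching their paths.
    pack.writestr("assets/minecraft/textures/README.txt",
                  "Replace default textures by mirroring their paths.")

print("wrote example_pack.zip")
```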
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in the Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike the Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. Among the features he explored in RubyDung was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time.

Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction, heavily influencing its gameplay and visual style: it brought back the first-person mode and established the "blocky" look and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release a full 1.0 version, the second part of the "Adventure Update", on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and the Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA) that had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions usually received annual major updates, free to players who had purchased the game, each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. In late 2024, however, Mojang announced a shift in its update strategy; rather than releasing large updates annually, it opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as the various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build, named Minecraft Classic, was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, obtained with a texture pack from Nvidia's website, or used with compatible third-party texture packs; it cannot simply be enabled with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features, such as dynamic shadows, screen-space reflections, volumetric fog, and bloom, without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date.

Development of the original edition of Minecraft, then known as Cave Game and now known as the Java Edition, began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases, dubbed Survival Test, Indev, and Infdev, were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer.

On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. The move included Mojang taking apparent ownership of the CraftBukkit server mod, though this apparent acquisition later became controversial, and its legitimacy was questioned because CraftBukkit is open source and licensed under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.

The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but it received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players.

Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.

Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems.

On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.

An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in autumn 2018; it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta of the Education Edition was made available for Google Play Store-compatible Chromebooks; the full game was released on the Google Play Store for Chromebooks on 7 August 2020.

On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.

A dedicated Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. Its beta release, as the Windows 10 Edition, launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version automatically gained access to the other, though the two versions otherwise remain separate.

Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive.

Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for the Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025.

Music and sound design

Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the processes for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hissing of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you."

Many of Rosenfeld's sound design decisions were accidental or spontaneous. The creeper notably lacks any specific noises apart from a loud fuse-like sound when it is about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld also called the sound engine "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine."

The background music in Minecraft consists of instrumental ambient music. To compose it, Rosenfeld used Ableton Live along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included music added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by the indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate-color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have contributed, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine as the new primary composer. Microsoft has retained ownership of all of the music besides Rosenfeld's independently released albums, with its label publishing the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions.

Rosenfeld stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not seen release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know."

Reception

Minecraft has received critical acclaim, with praise for the creative freedom it grants players and the ease with which it enables emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that it strikes a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands.

IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seemed "incomplete or thrown together in haste". A review of the alpha version by Scott Munro of the Daily Record called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, the gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these make the game more user-friendly. The Xbox One Edition was one of the best-received ports, praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds.

Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics but still noted a lack of content.

Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media, such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the sixth-best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies, and as of April 2025, it has sold over 350 million.

The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies, and Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million in the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million.

In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock Paper Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, where it won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won in the categories of Best Debut Game, Best Downloadable Game, and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games to be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit, which opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category, and Persson received The Special Award. Also in 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed sixth on USgamer's The 15 Best Games Since 2000 list, and in 2016, it placed sixth on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award (PC and Console).

Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, who cited emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang.

In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying it enabled improved security, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated.

In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and to dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four.

The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first vote this was changed, and losing mobs would instead have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs (the crab, the penguin, and the armadillo), with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired.

Cultural impact

In September 2019, The Guardian ranked Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales before its full release to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the early access model in indie game development.

Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion.

Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by the electronic music artist Deadmau5 in his performances, and is referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released; it made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age.

The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, Cody Sumter, a member of the Human Dynamics group at the MIT Media Lab, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap.

In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project, in Kibera, one of Nairobi's informal settlements, was in its planning phase at the time. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture; the ideas presented by the citizens served as a template for political decisions.

In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was feasible because Denmark is one of the flattest countries in the world, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed.

Taking advantage of the game's accessibility in places where websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, an in-game repository of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia, and Vietnam) who have been censored or arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people.

Despite its unpredictable nature, Minecraft speedrunning is popular; players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss. Some speedrunners use a combination of mods, external programs, and debug menus, while others play the game in a more vanilla or more consistency-oriented way.

Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 it reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition, with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines, such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming (see the sketch below). In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
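To give a flavor of the programming exercises mentioned above, the sketch below uses the third-party mcpi Python library, which exposes the simple world-manipulation API originally built for Minecraft: Pi Edition and commonly made available on Java Edition servers through bridge plugins such as RaspberryJuice. The library choice, the address, and the port are assumptions for illustration, not an official Mojang interface.

```python
# Minimal teaching sketch using the third-party "mcpi" library
# (pip install mcpi). Assumes a game listening on localhost:4711,
# e.g. Minecraft: Pi Edition or a Java server running the
# RaspberryJuice plugin -- an assumption, not an official API.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create("127.0.0.1", 4711)  # hypothetical local game
mc.postToChat("Building a glass tower from Python!")

pos = mc.player.getTilePos()  # player's current block coordinates
for height in range(10):
    # Place one glass block per level, two blocks from the player.
    mc.setBlock(pos.x + 2, pos.y + height, pos.z, block.GLASS.id)
```

Exercises like this let students watch loops and coordinates take effect immediately in the game world, which is the kind of direct feedback such classroom tools aim for.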
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft."

Following Minecraft's initial surge in popularity in 2010, other video games were criticized for various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system.

In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, which were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team.

In November 2024, the artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is generated by AI in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, the indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA notice was later withdrawn.

Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture sessions with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule.